00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2286
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3545
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.067 The recommended git tool is: git
00:00:00.067 using credential 00000000-0000-0000-0000-000000000002
00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.101 Fetching changes from the remote Git repository
00:00:00.103 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.152 Using shallow fetch with depth 1
00:00:00.152 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.152 > git --version # timeout=10
00:00:00.200 > git --version # 'git version 2.39.2'
00:00:00.200 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.231 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.231 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:11.071 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:11.083 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:11.095 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD)
00:00:11.095 > git config core.sparsecheckout # timeout=10
00:00:11.107 > git read-tree -mu HEAD # timeout=10
00:00:11.124 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5
00:00:11.142 Commit message: "packer: Fix typo in a package name"
00:00:11.142 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10
00:00:11.231 [Pipeline] Start of Pipeline
00:00:11.247 [Pipeline] library
00:00:11.248 Loading library shm_lib@master
00:00:11.249 Library shm_lib@master is cached. Copying from home.
00:00:11.263 [Pipeline] node
00:00:11.276 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:11.278 [Pipeline] {
00:00:11.289 [Pipeline] catchError
00:00:11.291 [Pipeline] {
00:00:11.306 [Pipeline] wrap
00:00:11.316 [Pipeline] {
00:00:11.325 [Pipeline] stage
00:00:11.327 [Pipeline] { (Prologue)
00:00:11.349 [Pipeline] echo
00:00:11.351 Node: VM-host-SM38
00:00:11.359 [Pipeline] cleanWs
00:00:11.372 [WS-CLEANUP] Deleting project workspace...
00:00:11.372 [WS-CLEANUP] Deferred wipeout is used...
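
The prologue above is a shallow, pinned-revision checkout of the build-pool repository. As a minimal sketch, the equivalent steps outside Jenkins look like the following (commands taken from the log itself; the GIT_ASKPASS credential helper and the proxy-dmz.intel.com proxy are environment details assumed to be configured separately):

  # Shallow fetch of refs/heads/master only, then a detached checkout of the
  # exact revision the job pinned -- mirroring the --depth=1 fetch above.
  git init jbp && cd jbp
  git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed
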
00:00:11.379 [WS-CLEANUP] done
00:00:11.609 [Pipeline] setCustomBuildProperty
00:00:11.700 [Pipeline] httpRequest
00:00:12.562 [Pipeline] echo
00:00:12.564 Sorcerer 10.211.164.101 is alive
00:00:12.572 [Pipeline] retry
00:00:12.573 [Pipeline] {
00:00:12.586 [Pipeline] httpRequest
00:00:12.591 HttpMethod: GET
00:00:12.591 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:12.591 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:12.602 Response Code: HTTP/1.1 200 OK
00:00:12.602 Success: Status code 200 is in the accepted range: 200,404
00:00:12.603 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:13.887 [Pipeline] }
00:00:13.905 [Pipeline] // retry
00:00:13.912 [Pipeline] sh
00:00:14.199 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:14.218 [Pipeline] httpRequest
00:00:14.597 [Pipeline] echo
00:00:14.599 Sorcerer 10.211.164.101 is alive
00:00:14.610 [Pipeline] retry
00:00:14.612 [Pipeline] {
00:00:14.629 [Pipeline] httpRequest
00:00:14.634 HttpMethod: GET
00:00:14.635 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:14.636 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:14.646 Response Code: HTTP/1.1 200 OK
00:00:14.646 Success: Status code 200 is in the accepted range: 200,404
00:00:14.647 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:53.750 [Pipeline] }
00:00:53.768 [Pipeline] // retry
00:00:53.776 [Pipeline] sh
00:00:54.068 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:56.615 [Pipeline] sh
00:00:56.899 + git -C spdk log --oneline -n5
00:00:56.899 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:56.899 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:56.899 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:56.899 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:56.899 9469ea403 nvme/fio_plugin: add trim support
00:00:56.920 [Pipeline] writeFile
00:00:56.934 [Pipeline] sh
00:00:57.222 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:57.235 [Pipeline] sh
00:00:57.521 + cat autorun-spdk.conf
00:00:57.521 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.521 SPDK_TEST_NVME=1
00:00:57.521 SPDK_TEST_FTL=1
00:00:57.521 SPDK_TEST_ISAL=1
00:00:57.521 SPDK_RUN_ASAN=1
00:00:57.521 SPDK_RUN_UBSAN=1
00:00:57.521 SPDK_TEST_XNVME=1
00:00:57.521 SPDK_TEST_NVME_FDP=1
00:00:57.521 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:57.529 RUN_NIGHTLY=1
00:00:57.531 [Pipeline] }
00:00:57.545 [Pipeline] // stage
00:00:57.561 [Pipeline] stage
00:00:57.563 [Pipeline] { (Run VM)
00:00:57.577 [Pipeline] sh
00:00:57.863 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:57.863 + echo 'Start stage prepare_nvme.sh'
00:00:57.863 Start stage prepare_nvme.sh
00:00:57.863 + [[ -n 7 ]]
00:00:57.863 + disk_prefix=ex7
00:00:57.863 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:57.863 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:57.863 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:57.863 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.863 ++ SPDK_TEST_NVME=1
00:00:57.863 ++ SPDK_TEST_FTL=1
00:00:57.863 ++ SPDK_TEST_ISAL=1
00:00:57.863 ++ SPDK_RUN_ASAN=1
00:00:57.863 ++ SPDK_RUN_UBSAN=1
00:00:57.863 ++ SPDK_TEST_XNVME=1
00:00:57.863 ++ SPDK_TEST_NVME_FDP=1
00:00:57.863 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:57.863 ++ RUN_NIGHTLY=1
00:00:57.863 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:57.863 + nvme_files=()
00:00:57.863 + declare -A nvme_files
00:00:57.863 + backend_dir=/var/lib/libvirt/images/backends
00:00:57.863 + nvme_files['nvme.img']=5G
00:00:57.863 + nvme_files['nvme-cmb.img']=5G
00:00:57.863 + nvme_files['nvme-multi0.img']=4G
00:00:57.863 + nvme_files['nvme-multi1.img']=4G
00:00:57.863 + nvme_files['nvme-multi2.img']=4G
00:00:57.863 + nvme_files['nvme-openstack.img']=8G
00:00:57.863 + nvme_files['nvme-zns.img']=5G
00:00:57.863 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:57.863 + (( SPDK_TEST_FTL == 1 ))
00:00:57.863 + nvme_files["nvme-ftl.img"]=6G
00:00:57.863 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:57.863 + nvme_files["nvme-fdp.img"]=1G
00:00:57.863 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:57.863 + for nvme in "${!nvme_files[@]}"
00:00:57.863 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:00:57.863 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:57.863 + for nvme in "${!nvme_files[@]}"
00:00:57.863 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:00:58.125 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:58.125 + for nvme in "${!nvme_files[@]}"
00:00:58.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:00:58.125 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:58.125 + for nvme in "${!nvme_files[@]}"
00:00:58.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:00:58.125 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:58.125 + for nvme in "${!nvme_files[@]}"
00:00:58.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:00:58.125 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:58.125 + for nvme in "${!nvme_files[@]}"
00:00:58.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:00:58.387 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:58.387 + for nvme in "${!nvme_files[@]}"
00:00:58.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:00:58.387 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:58.387 + for nvme in "${!nvme_files[@]}"
00:00:58.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:00:58.387 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:58.387 + for nvme in "${!nvme_files[@]}"
00:00:58.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:00:58.387 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:58.387 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:00:58.387 + echo 'End stage prepare_nvme.sh'
00:00:58.387 End stage prepare_nvme.sh
00:00:58.401 [Pipeline] sh
00:00:58.687 + DISTRO=fedora39
00:00:58.687 + CPUS=10
00:00:58.687 + RAM=12288
00:00:58.687 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:58.687 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:58.687
00:00:58.687 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:58.687 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:58.687 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:58.687 HELP=0
00:00:58.687 DRY_RUN=0
00:00:58.687 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:00:58.687 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:58.687 NVME_AUTO_CREATE=0
00:00:58.687 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:00:58.687 NVME_CMB=,,,,
00:00:58.687 NVME_PMR=,,,,
00:00:58.687 NVME_ZNS=,,,,
00:00:58.687 NVME_MS=true,,,,
00:00:58.687 NVME_FDP=,,,on,
00:00:58.687 SPDK_VAGRANT_DISTRO=fedora39
00:00:58.687 SPDK_VAGRANT_VMCPU=10
00:00:58.687 SPDK_VAGRANT_VMRAM=12288
00:00:58.687 SPDK_VAGRANT_PROVIDER=libvirt
00:00:58.687 SPDK_VAGRANT_HTTP_PROXY=
00:00:58.687 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:58.687 SPDK_OPENSTACK_NETWORK=0
00:00:58.687 VAGRANT_PACKAGE_BOX=0
00:00:58.687 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:58.687 FORCE_DISTRO=true
00:00:58.687 VAGRANT_BOX_VERSION=
00:00:58.687 EXTRA_VAGRANTFILES=
00:00:58.687 NIC_MODEL=e1000
00:00:58.687
00:00:58.687 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:00:58.687 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:01.237 Bringing machine 'default' up with 'libvirt' provider...
00:01:01.498 ==> default: Creating image (snapshot of base box volume).
00:01:01.761 ==> default: Creating domain with the following settings...
00:01:01.761 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1729109296_a3eea282e5fbbe520a5d
00:01:01.761 ==> default: -- Domain type: kvm
00:01:01.761 ==> default: -- Cpus: 10
00:01:01.761 ==> default: -- Feature: acpi
00:01:01.761 ==> default: -- Feature: apic
00:01:01.761 ==> default: -- Feature: pae
00:01:01.761 ==> default: -- Memory: 12288M
00:01:01.761 ==> default: -- Memory Backing: hugepages:
00:01:01.761 ==> default: -- Management MAC:
00:01:01.761 ==> default: -- Loader:
00:01:01.761 ==> default: -- Nvram:
00:01:01.761 ==> default: -- Base box: spdk/fedora39
00:01:01.761 ==> default: -- Storage pool: default
00:01:01.761 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729109296_a3eea282e5fbbe520a5d.img (20G)
00:01:01.761 ==> default: -- Volume Cache: default
00:01:01.761 ==> default: -- Kernel:
00:01:01.761 ==> default: -- Initrd:
00:01:01.761 ==> default: -- Graphics Type: vnc
00:01:01.761 ==> default: -- Graphics Port: -1
00:01:01.761 ==> default: -- Graphics IP: 127.0.0.1
00:01:01.761 ==> default: -- Graphics Password: Not defined
00:01:01.761 ==> default: -- Video Type: cirrus
00:01:01.761 ==> default: -- Video VRAM: 9216
00:01:01.761 ==> default: -- Sound Type:
00:01:01.761 ==> default: -- Keymap: en-us
00:01:01.761 ==> default: -- TPM Path:
00:01:01.761 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:01.761 ==> default: -- Command line args:
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:01.761 ==> default: -> value=-drive,
00:01:01.761 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:01.761 ==> default: -> value=-drive,
00:01:01.761 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:01.761 ==> default: -> value=-drive,
00:01:01.761 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.761 ==> default: -> value=-drive,
00:01:01.761 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.761 ==> default: -> value=-drive,
00:01:01.761 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:01:01.761 ==> default: -> value=-drive,
00:01:01.761 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:01.761 ==> default: -> value=-device,
00:01:01.761 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.761 ==> default: Creating shared folders metadata...
00:01:01.761 ==> default: Starting domain.
00:01:03.677 ==> default: Waiting for domain to get an IP address...
00:01:21.802 ==> default: Waiting for SSH to become available...
00:01:21.802 ==> default: Configuring and enabling network interfaces...
00:01:25.161 default: SSH address: 192.168.121.226:22
00:01:25.161 default: SSH username: vagrant
00:01:25.161 default: SSH auth method: private key
00:01:27.707 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:35.850 ==> default: Mounting SSHFS shared folder...
00:01:37.235 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:37.235 ==> default: Checking Mount..
00:01:38.179 ==> default: Folder Successfully Mounted!
00:01:38.440
00:01:38.440 SUCCESS!
00:01:38.440
00:01:38.440 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:38.440 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:38.440 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:38.440
00:01:38.450 [Pipeline] }
00:01:38.465 [Pipeline] // stage
00:01:38.474 [Pipeline] dir
00:01:38.475 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:38.476 [Pipeline] {
00:01:38.489 [Pipeline] catchError
00:01:38.491 [Pipeline] {
00:01:38.503 [Pipeline] sh
00:01:38.787 + vagrant ssh-config --host vagrant
00:01:38.787 + sed -ne '/^Host/,$p'
00:01:38.787 + tee ssh_conf
00:01:41.332 Host vagrant
00:01:41.332 HostName 192.168.121.226
00:01:41.332 User vagrant
00:01:41.332 Port 22
00:01:41.332 UserKnownHostsFile /dev/null
00:01:41.332 StrictHostKeyChecking no
00:01:41.332 PasswordAuthentication no
00:01:41.332 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:41.332 IdentitiesOnly yes
00:01:41.332 LogLevel FATAL
00:01:41.332 ForwardAgent yes
00:01:41.332 ForwardX11 yes
00:01:41.332
00:01:41.347 [Pipeline] withEnv
00:01:41.349 [Pipeline] {
00:01:41.364 [Pipeline] sh
00:01:41.650 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:41.650 source /etc/os-release
00:01:41.650 [[ -e /image.version ]] && img=$(< /image.version)
00:01:41.650 # Minimal, systemd-like check.
00:01:41.650 if [[ -e /.dockerenv ]]; then
00:01:41.650 # Clear garbage from the node'\''s name:
00:01:41.650 # agt-er_autotest_547-896 -> autotest_547-896
00:01:41.650 # $HOSTNAME is the actual container id
00:01:41.650 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:41.650 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:41.650 # We can assume this is a mount from a host where container is running,
00:01:41.650 # so fetch its hostname to easily identify the target swarm worker.
00:01:41.650 container="$(< /etc/hostname) ($agent)"
00:01:41.650 else
00:01:41.650 # Fallback
00:01:41.650 container=$agent
00:01:41.650 fi
00:01:41.650 fi
00:01:41.650 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:41.650 '
00:01:41.927 [Pipeline] }
00:01:41.944 [Pipeline] // withEnv
00:01:41.952 [Pipeline] setCustomBuildProperty
00:01:41.967 [Pipeline] stage
00:01:41.970 [Pipeline] { (Tests)
00:01:41.988 [Pipeline] sh
00:01:42.274 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:42.542 [Pipeline] sh
00:01:42.816 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:43.085 [Pipeline] timeout
00:01:43.086 Timeout set to expire in 50 min
00:01:43.087 [Pipeline] {
00:01:43.101 [Pipeline] sh
00:01:43.381 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:43.954 HEAD is now at 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:43.967 [Pipeline] sh
00:01:44.252 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:44.528 [Pipeline] sh
00:01:44.846 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:45.124 [Pipeline] sh
00:01:45.408 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:45.668 ++ readlink -f spdk_repo
00:01:45.668 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:45.669 + [[ -n /home/vagrant/spdk_repo ]]
00:01:45.669 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:45.669 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:45.669 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:45.669 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:45.669 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:45.669 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:45.669 + cd /home/vagrant/spdk_repo
00:01:45.669 + source /etc/os-release
00:01:45.669 ++ NAME='Fedora Linux'
00:01:45.669 ++ VERSION='39 (Cloud Edition)'
00:01:45.669 ++ ID=fedora
00:01:45.669 ++ VERSION_ID=39
00:01:45.669 ++ VERSION_CODENAME=
00:01:45.669 ++ PLATFORM_ID=platform:f39
00:01:45.669 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:45.669 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:45.669 ++ LOGO=fedora-logo-icon
00:01:45.669 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:45.669 ++ HOME_URL=https://fedoraproject.org/
00:01:45.669 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:45.669 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:45.669 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:45.669 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:45.669 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:45.669 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:45.669 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:45.669 ++ SUPPORT_END=2024-11-12
00:01:45.669 ++ VARIANT='Cloud Edition'
00:01:45.669 ++ VARIANT_ID=cloud
00:01:45.669 + uname -a
00:01:45.669 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:45.669 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:45.669 Hugepages
00:01:45.669 node hugesize free / total
00:01:45.669 node0 1048576kB 0 / 0
00:01:45.669 node0 2048kB 0 / 0
00:01:45.669
00:01:45.669 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:45.669 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:45.669 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:45.669 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:45.669 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:45.930 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:45.930 + rm -f /tmp/spdk-ld-path
00:01:45.930 + source autorun-spdk.conf
00:01:45.930 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:45.930 ++ SPDK_TEST_NVME=1
00:01:45.930 ++ SPDK_TEST_FTL=1
00:01:45.930 ++ SPDK_TEST_ISAL=1
00:01:45.930 ++ SPDK_RUN_ASAN=1
00:01:45.930 ++ SPDK_RUN_UBSAN=1
00:01:45.930 ++ SPDK_TEST_XNVME=1
00:01:45.930 ++ SPDK_TEST_NVME_FDP=1
00:01:45.930 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:45.930 ++ RUN_NIGHTLY=1
00:01:45.930 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:45.930 + [[ -n '' ]]
00:01:45.930 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:45.930 + for M in /var/spdk/build-*-manifest.txt
00:01:45.930 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:45.930 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:45.930 + for M in /var/spdk/build-*-manifest.txt
00:01:45.930 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:45.930 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:45.930 + for M in /var/spdk/build-*-manifest.txt
00:01:45.930 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:45.930 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:45.930 ++ uname
00:01:45.930 + [[ Linux == \L\i\n\u\x ]]
00:01:45.930 + sudo dmesg -T
00:01:45.930 + sudo dmesg --clear
00:01:45.930 + dmesg_pid=4981
00:01:45.930 + [[ Fedora Linux == FreeBSD ]]
00:01:45.930 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:45.930 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:45.930 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:45.930 + [[ -x /usr/src/fio-static/fio ]]
00:01:45.930 + sudo dmesg -Tw
00:01:45.930 + export FIO_BIN=/usr/src/fio-static/fio
00:01:45.930 + FIO_BIN=/usr/src/fio-static/fio
00:01:45.930 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:45.930 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:45.930 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:45.930 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:45.930 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:45.930 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:45.930 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:45.930 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:45.930 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:45.930 Test configuration:
00:01:45.930 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:45.930 SPDK_TEST_NVME=1
00:01:45.930 SPDK_TEST_FTL=1
00:01:45.930 SPDK_TEST_ISAL=1
00:01:45.930 SPDK_RUN_ASAN=1
00:01:45.930 SPDK_RUN_UBSAN=1
00:01:45.930 SPDK_TEST_XNVME=1
00:01:45.930 SPDK_TEST_NVME_FDP=1
00:01:45.930 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:45.930 RUN_NIGHTLY=1 20:09:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:45.930 20:09:00 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:45.930 20:09:00 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:45.930 20:09:00 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:45.930 20:09:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.930 20:09:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.930 20:09:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.930 20:09:00 -- paths/export.sh@5 -- $ export PATH
00:01:45.930 20:09:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:45.930 20:09:00 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:45.930 20:09:00 -- common/autobuild_common.sh@440 -- $ date +%s
00:01:45.930 20:09:00 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1729109340.XXXXXX
00:01:45.930 20:09:00 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1729109340.WcS2Bz
00:01:45.930 20:09:00 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:01:45.930 20:09:00 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:01:45.930 20:09:00 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:45.930 20:09:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:45.930 20:09:00 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:45.930 20:09:00 -- common/autobuild_common.sh@456 -- $ get_config_params
00:01:45.930 20:09:00 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:45.930 20:09:00 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.193 20:09:00 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:46.193 20:09:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:46.193 20:09:00 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:46.193 20:09:00 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:46.193 20:09:00 -- spdk/autobuild.sh@16 -- $ date -u
00:01:46.193 Wed Oct 16 08:09:00 PM UTC 2024
00:01:46.193 20:09:00 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:46.194 LTS-66-g726a04d70
00:01:46.194 20:09:00 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:46.194 20:09:00 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:46.194 20:09:00 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:46.194 20:09:00 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:46.194 20:09:00 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.194 ************************************
00:01:46.194 START TEST asan
00:01:46.194 ************************************
00:01:46.194 using asan
00:01:46.194 20:09:00 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:01:46.194
00:01:46.194 real 0m0.000s
00:01:46.194 user 0m0.000s
00:01:46.194 sys 0m0.000s
00:01:46.194 20:09:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:46.194 ************************************
00:01:46.194 END TEST asan
00:01:46.194 ************************************
00:01:46.194 20:09:00 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.194 20:09:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:46.194 20:09:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:46.194 20:09:00 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:46.194 20:09:00 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:46.194 20:09:00 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.194 ************************************
00:01:46.194 START TEST ubsan
00:01:46.194 ************************************
00:01:46.194 using ubsan
00:01:46.194 20:09:00 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:46.194
00:01:46.194 real 0m0.000s
00:01:46.194 user 0m0.000s
00:01:46.194 sys 0m0.000s
00:01:46.194 ************************************
00:01:46.194 END TEST ubsan ************************************
00:01:46.194 20:09:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:46.194 20:09:00 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.195 20:09:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:46.195 20:09:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:46.195 20:09:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:46.195 20:09:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:46.195 20:09:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:46.195 20:09:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:46.195 20:09:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:46.195 20:09:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:46.195 20:09:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:46.195 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:46.195 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:46.769 Using 'verbs' RDMA provider
00:01:59.579 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:09.583 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:09.583 Creating mk/config.mk...done.
00:02:09.583 Creating mk/cc.flags.mk...done.
00:02:09.583 Type 'make' to build.
00:02:09.583 20:09:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:09.583 20:09:23 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:02:09.583 20:09:23 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:09.583 20:09:23 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.583 ************************************
00:02:09.583 START TEST make
00:02:09.583 ************************************
00:02:09.583 20:09:23 -- common/autotest_common.sh@1104 -- $ make -j10
00:02:09.583 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:09.583 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:09.583 meson setup builddir \
00:02:09.583 -Dwith-libaio=enabled \
00:02:09.583 -Dwith-liburing=enabled \
00:02:09.583 -Dwith-libvfn=disabled \
00:02:09.583 -Dwith-spdk=false && \
00:02:09.583 meson compile -C builddir && \
00:02:09.583 cd -)
00:02:09.583 make[1]: Nothing to be done for 'all'.
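
The ./configure line above is not hand-written: autorun sources autorun-spdk.conf and get_config_params translates the SPDK_* flags into configure options (SPDK_RUN_ASAN=1 becomes --enable-asan, SPDK_RUN_UBSAN=1 becomes --enable-ubsan, SPDK_TEST_XNVME=1 becomes --with-xnvme, and so on). A minimal sketch of that mapping, covering only the three flags visible in this run; the real helper in common/autotest_common.sh handles many more:

  # Sketch: derive configure flags from the sourced test configuration.
  source /home/vagrant/spdk_repo/autorun-spdk.conf
  config_params='--enable-debug --enable-werror'
  [[ $SPDK_RUN_ASAN -eq 1 ]] && config_params+=' --enable-asan'
  [[ $SPDK_RUN_UBSAN -eq 1 ]] && config_params+=' --enable-ubsan'
  [[ $SPDK_TEST_XNVME -eq 1 ]] && config_params+=' --with-xnvme'
  ./configure $config_params
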
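Also worth pulling out of the "Run VM" stage earlier in this log: the -device/-drive fragments Vagrant passed through are a plain QEMU command line. A minimal sketch of just the FDP-enabled controller, with arguments copied from the log (the machine, memory, and boot-disk options that libvirt adds are omitted):

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The Flexible Data Placement parameters (fdp.runs, fdp.nrg, fdp.nruh) live on the nvme-subsys device; the controller joins the subsystem via subsys=, and the namespace attached to it is what SPDK_TEST_NVME_FDP exercises.
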
00:02:12.131 The Meson build system
00:02:12.131 Version: 1.5.0
00:02:12.131 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:12.131 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:12.131 Build type: native build
00:02:12.131 Project name: xnvme
00:02:12.131 Project version: 0.7.3
00:02:12.131 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:12.131 C linker for the host machine: cc ld.bfd 2.40-14
00:02:12.131 Host machine cpu family: x86_64
00:02:12.131 Host machine cpu: x86_64
00:02:12.131 Message: host_machine.system: linux
00:02:12.131 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:12.131 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:12.131 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:12.131 Run-time dependency threads found: YES
00:02:12.131 Has header "setupapi.h" : NO
00:02:12.131 Has header "linux/blkzoned.h" : YES
00:02:12.131 Has header "linux/blkzoned.h" : YES (cached)
00:02:12.131 Has header "libaio.h" : YES
00:02:12.131 Library aio found: YES
00:02:12.131 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:12.131 Run-time dependency liburing found: YES 2.2
00:02:12.131 Dependency libvfn skipped: feature with-libvfn disabled
00:02:12.131 Run-time dependency appleframeworks found: NO (tried framework)
00:02:12.131 Run-time dependency appleframeworks found: NO (tried framework)
00:02:12.131 Configuring xnvme_config.h using configuration
00:02:12.131 Configuring xnvme.spec using configuration
00:02:12.131 Run-time dependency bash-completion found: YES 2.11
00:02:12.131 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:12.131 Program cp found: YES (/usr/bin/cp)
00:02:12.131 Has header "winsock2.h" : NO
00:02:12.131 Has header "dbghelp.h" : NO
00:02:12.131 Library rpcrt4 found: NO
00:02:12.131 Library rt found: YES
00:02:12.131 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:12.131 Found CMake: /usr/bin/cmake (3.27.7)
00:02:12.131 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:12.131 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:12.131 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:12.131 Build targets in project: 32
00:02:12.131
00:02:12.131 xnvme 0.7.3
00:02:12.131
00:02:12.131 User defined options
00:02:12.131 with-libaio : enabled
00:02:12.131 with-liburing: enabled
00:02:12.131 with-libvfn : disabled
00:02:12.131 with-spdk : false
00:02:12.131
00:02:12.131 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:12.131 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:12.131 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:12.393 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:12.393 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:12.393 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:12.393 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:12.393 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:12.393 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:12.393 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:12.393 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:12.393 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:12.393 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:12.393 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:12.393 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:12.393 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:12.393 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:12.393 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:12.393 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:12.393 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:12.393 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:12.393 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:12.393 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:12.653 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:12.653 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:12.653 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:12.653 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:12.653 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:12.653 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:12.653 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:12.653 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:12.653 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:12.653 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:12.653 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:12.653 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:12.653 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:12.653 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:12.653 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:12.653 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:12.653 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:12.653 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:12.653 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:12.653 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:12.653 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:12.653 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:12.653 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:12.653 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:12.653 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:12.653 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:12.653 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:12.653 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:12.653 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:12.653 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:12.653 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:12.653 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:12.653 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:12.653 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:12.653 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:12.913 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:12.913 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:12.913 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:12.913 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:12.913 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:12.913 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:12.913 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:12.913 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:12.913 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:12.913 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:12.913 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:12.913 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:12.913 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:12.913 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:12.913 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:12.913 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:12.913 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:12.913 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:12.913 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:12.913 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:12.913 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:12.913 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:12.913 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:12.913 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:12.913 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:13.173 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:13.173 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:13.173 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:13.173 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:13.173 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:13.173 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:13.173 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:13.173 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:13.173 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:13.173 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:13.173 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:13.173 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:13.173 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:13.173 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:13.173 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:13.173 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:13.173 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:13.173 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:13.173 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:13.173 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:13.173 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:13.173 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:13.173 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:13.173 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:13.173 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:13.173 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:13.173 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:13.173 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:13.173 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:13.173 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:13.173 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:13.173 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:13.173 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:13.453 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:13.453 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:13.453 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:13.453 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:13.453 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:13.453 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:13.453 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:13.453 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:13.453 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:13.453 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:13.453 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:13.453 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:13.453 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:13.453 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:13.453 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:13.453 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:13.453 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:13.453 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:13.453 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:13.453 [134/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:13.453 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:13.453 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:13.453 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:13.453 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:13.453 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:13.454 [140/203] Linking target lib/libxnvme.so
00:02:13.454 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:13.454 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:13.454 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:13.715 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:13.715 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:13.715 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:13.715 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:13.715 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:13.715 [149/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:13.715 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:13.715 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:13.715 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:13.715 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:13.715 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:13.715 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:13.715 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:13.715 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:13.715 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:13.715 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:13.715 [160/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:13.715 [161/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:13.976 [162/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:13.976 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:13.976 [164/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:13.976 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:13.976 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:13.976 [167/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:13.976 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:13.976 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:13.976 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:13.976 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:13.976 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:13.976 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:13.976 [174/203] Linking static target lib/libxnvme.a
00:02:14.236 [175/203] Linking target tests/xnvme_tests_async_intf
00:02:14.236 [176/203] Linking target tests/xnvme_tests_scc
00:02:14.236 [177/203] Linking target tests/xnvme_tests_lblk
00:02:14.236 [178/203] Linking target tests/xnvme_tests_enum
00:02:14.236 [179/203] Linking target tests/xnvme_tests_cli
00:02:14.236 [180/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:14.236 [181/203] Linking target tests/xnvme_tests_znd_state
00:02:14.236 [182/203] Linking target tests/xnvme_tests_buf
00:02:14.236 [183/203] Linking target tests/xnvme_tests_xnvme_file
00:02:14.236 [184/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:14.236 [185/203] Linking target tests/xnvme_tests_znd_append
00:02:14.236 [186/203] Linking target tests/xnvme_tests_ioworker
00:02:14.236 [187/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:14.236 [188/203] Linking target tests/xnvme_tests_kvs
00:02:14.236 [189/203] Linking target tools/xnvme
00:02:14.236 [190/203] Linking target tools/lblk
00:02:14.236 [191/203] Linking target tools/xdd
00:02:14.236 [192/203] Linking target tools/xnvme_file
00:02:14.236 [193/203] Linking target examples/xnvme_io_async
00:02:14.236 [194/203] Linking target tools/zoned
00:02:14.236 [195/203] Linking target examples/xnvme_dev
00:02:14.236 [196/203] Linking target tests/xnvme_tests_map
00:02:14.236 [197/203] Linking target examples/xnvme_enum
00:02:14.236 [198/203] Linking target tools/kvs
00:02:14.236 [199/203] Linking target examples/xnvme_hello
00:02:14.236 [200/203] Linking target examples/xnvme_single_async
00:02:14.236 [201/203] Linking target examples/xnvme_single_sync
00:02:14.236 [202/203] Linking target examples/zoned_io_async
00:02:14.236 [203/203] Linking target examples/zoned_io_sync
00:02:14.236 INFO: autodetecting backend as ninja
00:02:14.236 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:14.236 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:19.528 The Meson build system
00:02:19.528 Version: 1.5.0
00:02:19.528 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:19.528 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:19.528 Build type: native build
00:02:19.528 Program cat found: YES (/usr/bin/cat)
00:02:19.528 Project name: DPDK
00:02:19.528 Project version: 23.11.0
00:02:19.528 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:19.528 C linker for the host machine: cc ld.bfd 2.40-14
00:02:19.528 Host machine cpu family: x86_64
00:02:19.528 Host machine cpu: x86_64
00:02:19.528 Message: ## Building in Developer Mode ##
00:02:19.528 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:19.528 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:19.528 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:19.528 Program python3 found: YES (/usr/bin/python3)
00:02:19.528 Program cat found: YES (/usr/bin/cat)
00:02:19.528 Compiler for C supports arguments -march=native: YES
00:02:19.528 Checking for size of "void *" : 8
00:02:19.528 Checking for size of "void *" : 8 (cached)
00:02:19.528 Library m found: YES
00:02:19.528 Library numa found: YES
00:02:19.528 Has header "numaif.h" : YES
00:02:19.528 Library fdt found: NO
00:02:19.528 Library execinfo found: NO
00:02:19.528 Has header "execinfo.h" : YES
00:02:19.528 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:19.528 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:19.528 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:19.528 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:19.528 Run-time dependency openssl found: YES 3.1.1
00:02:19.528 Run-time dependency libpcap found: YES 1.10.4
00:02:19.528 Has header "pcap.h" with dependency libpcap: YES
00:02:19.528 Compiler for C supports arguments -Wcast-qual: YES
00:02:19.528 Compiler for C supports arguments -Wdeprecated: YES
00:02:19.528 Compiler for C supports arguments -Wformat: YES
00:02:19.528 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:19.528 Compiler for C supports arguments -Wformat-security: NO
00:02:19.528 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:19.528 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:19.528 Compiler for C supports arguments -Wnested-externs: YES
00:02:19.528 Compiler for C supports arguments -Wold-style-definition: YES
00:02:19.528 Compiler for C supports arguments -Wpointer-arith: YES
00:02:19.528 Compiler for C supports arguments -Wsign-compare: YES
00:02:19.528 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:19.528 Compiler for C supports arguments -Wundef: YES
00:02:19.528 Compiler for C supports arguments -Wwrite-strings: YES
00:02:19.528 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:19.528 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:19.528 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:19.528 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:19.528 Program objdump found: YES (/usr/bin/objdump)
00:02:19.528 Compiler for C supports arguments -mavx512f: YES
00:02:19.528 Checking if "AVX512 checking" compiles: YES
00:02:19.528 Fetching value of define "__SSE4_2__" : 1
00:02:19.528 Fetching value of define "__AES__" : 1
00:02:19.528 Fetching value of define "__AVX__" : 1
00:02:19.528 Fetching value of define "__AVX2__" : 1
00:02:19.528 Fetching value of define "__AVX512BW__" : 1
00:02:19.528 Fetching value of define "__AVX512CD__" : 1
00:02:19.528 Fetching value of define "__AVX512DQ__" : 1
00:02:19.528 Fetching value of define "__AVX512F__" : 1
00:02:19.528 Fetching value of define "__AVX512VL__" : 1
00:02:19.528 Fetching value of define "__PCLMUL__" : 1
00:02:19.528 Fetching value of define "__RDRND__" : 1
00:02:19.528 Fetching value of define "__RDSEED__" : 1
00:02:19.528 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:19.528 Fetching value of define "__znver1__" : (undefined)
00:02:19.528 Fetching value of define "__znver2__" : (undefined)
00:02:19.528 Fetching value of define "__znver3__" : (undefined)
00:02:19.528 Fetching value of define "__znver4__" : (undefined)
00:02:19.528 Library asan found: YES
00:02:19.528 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:19.528 Message: lib/log: Defining dependency "log"
00:02:19.528 Message: lib/kvargs: Defining dependency "kvargs"
00:02:19.528 Message: lib/telemetry: Defining dependency "telemetry"
00:02:19.528 Library rt found: YES
00:02:19.528 Checking for function "getentropy" : NO
00:02:19.528 Message: lib/eal: Defining dependency "eal"
00:02:19.528 Message: lib/ring: Defining dependency "ring"
00:02:19.528 Message: lib/rcu: Defining dependency "rcu"
00:02:19.528 Message: lib/mempool: Defining dependency "mempool"
00:02:19.528 Message: lib/mbuf: Defining dependency "mbuf"
00:02:19.528 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:19.528 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:19.528 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:19.528 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:19.528 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:19.528 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:19.528 Compiler for C supports arguments -mpclmul: YES
00:02:19.528 Compiler for C supports arguments -maes: YES
00:02:19.528 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:19.528 Compiler for C supports arguments -mavx512bw: YES
00:02:19.528 Compiler for C supports arguments -mavx512dq: YES
00:02:19.528 Compiler for C supports arguments -mavx512vl: YES
00:02:19.528 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:19.528 Compiler for C supports arguments -mavx2: YES
00:02:19.528 Compiler for C supports arguments -mavx: YES
00:02:19.529 Message: lib/net: Defining dependency "net"
00:02:19.529 Message: lib/meter: Defining dependency "meter"
00:02:19.529 Message: lib/ethdev: Defining dependency "ethdev"
00:02:19.529 Message: lib/pci: Defining dependency "pci"
00:02:19.529 Message: lib/cmdline: Defining dependency "cmdline"
00:02:19.529 Message: lib/hash: Defining dependency "hash"
00:02:19.529 Message: lib/timer: Defining dependency "timer"
00:02:19.529 Message: lib/compressdev: Defining dependency "compressdev"
00:02:19.529 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:19.529 Message: lib/dmadev: Defining dependency "dmadev"
00:02:19.529 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:19.529 Message: lib/power: Defining dependency "power"
00:02:19.529 Message: lib/reorder: Defining dependency "reorder"
00:02:19.529 Message: lib/security: Defining dependency "security"
00:02:19.529 Has header "linux/userfaultfd.h" : YES
00:02:19.529 Has header "linux/vduse.h" : YES
00:02:19.529 Message: lib/vhost: Defining dependency "vhost"
00:02:19.529 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:19.529 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:19.529 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:19.529 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:19.529 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:19.529 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:19.529 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:19.529 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:19.529 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:19.529 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:19.529 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:19.529 Configuring doxy-api-html.conf using configuration
00:02:19.529 Configuring doxy-api-man.conf using configuration
00:02:19.529 Program mandb found: YES (/usr/bin/mandb)
00:02:19.529 Program sphinx-build found: NO
00:02:19.529 Configuring rte_build_config.h using configuration
00:02:19.529 Message:
00:02:19.529 =================
00:02:19.529 Applications Enabled
00:02:19.529 =================
00:02:19.529
00:02:19.529 apps:
00:02:19.529
00:02:19.529
00:02:19.529 Message:
00:02:19.529 =================
00:02:19.529 Libraries Enabled
00:02:19.529 =================
00:02:19.529
00:02:19.529 libs:
00:02:19.529 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:19.529 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:19.529 cryptodev, dmadev, power, reorder, security, vhost,
00:02:19.529
00:02:19.529 Message:
00:02:19.529 ===============
00:02:19.529 Drivers Enabled
00:02:19.529 ===============
00:02:19.529
00:02:19.529 common:
00:02:19.529
00:02:19.529 bus:
00:02:19.529 pci, vdev,
00:02:19.529 mempool:
00:02:19.529 ring,
00:02:19.529 dma:
00:02:19.529
00:02:19.529 net:
00:02:19.529
00:02:19.529 crypto:
00:02:19.529
00:02:19.529 compress:
00:02:19.529
00:02:19.529 vdpa:
00:02:19.529
00:02:19.529
00:02:19.529 Message:
00:02:19.529 =================
00:02:19.529 Content Skipped
00:02:19.529 =================
00:02:19.529
00:02:19.529 apps:
00:02:19.529 dumpcap: explicitly disabled via build config
00:02:19.529 graph: explicitly disabled via build config
00:02:19.529 pdump: explicitly disabled via build config
00:02:19.529 proc-info: explicitly disabled via build config
00:02:19.529 test-acl: explicitly disabled via build config
00:02:19.529 test-bbdev: explicitly disabled via build config
00:02:19.529 test-cmdline: explicitly
disabled via build config 00:02:19.529 test-compress-perf: explicitly disabled via build config 00:02:19.529 test-crypto-perf: explicitly disabled via build config 00:02:19.529 test-dma-perf: explicitly disabled via build config 00:02:19.529 test-eventdev: explicitly disabled via build config 00:02:19.529 test-fib: explicitly disabled via build config 00:02:19.529 test-flow-perf: explicitly disabled via build config 00:02:19.529 test-gpudev: explicitly disabled via build config 00:02:19.529 test-mldev: explicitly disabled via build config 00:02:19.529 test-pipeline: explicitly disabled via build config 00:02:19.529 test-pmd: explicitly disabled via build config 00:02:19.529 test-regex: explicitly disabled via build config 00:02:19.529 test-sad: explicitly disabled via build config 00:02:19.529 test-security-perf: explicitly disabled via build config 00:02:19.529 00:02:19.529 libs: 00:02:19.529 metrics: explicitly disabled via build config 00:02:19.529 acl: explicitly disabled via build config 00:02:19.529 bbdev: explicitly disabled via build config 00:02:19.529 bitratestats: explicitly disabled via build config 00:02:19.529 bpf: explicitly disabled via build config 00:02:19.529 cfgfile: explicitly disabled via build config 00:02:19.529 distributor: explicitly disabled via build config 00:02:19.529 efd: explicitly disabled via build config 00:02:19.529 eventdev: explicitly disabled via build config 00:02:19.529 dispatcher: explicitly disabled via build config 00:02:19.529 gpudev: explicitly disabled via build config 00:02:19.529 gro: explicitly disabled via build config 00:02:19.529 gso: explicitly disabled via build config 00:02:19.529 ip_frag: explicitly disabled via build config 00:02:19.529 jobstats: explicitly disabled via build config 00:02:19.529 latencystats: explicitly disabled via build config 00:02:19.529 lpm: explicitly disabled via build config 00:02:19.529 member: explicitly disabled via build config 00:02:19.529 pcapng: explicitly disabled via build config 00:02:19.529 rawdev: explicitly disabled via build config 00:02:19.529 regexdev: explicitly disabled via build config 00:02:19.529 mldev: explicitly disabled via build config 00:02:19.529 rib: explicitly disabled via build config 00:02:19.529 sched: explicitly disabled via build config 00:02:19.529 stack: explicitly disabled via build config 00:02:19.529 ipsec: explicitly disabled via build config 00:02:19.529 pdcp: explicitly disabled via build config 00:02:19.529 fib: explicitly disabled via build config 00:02:19.529 port: explicitly disabled via build config 00:02:19.529 pdump: explicitly disabled via build config 00:02:19.529 table: explicitly disabled via build config 00:02:19.529 pipeline: explicitly disabled via build config 00:02:19.529 graph: explicitly disabled via build config 00:02:19.529 node: explicitly disabled via build config 00:02:19.529 00:02:19.529 drivers: 00:02:19.529 common/cpt: not in enabled drivers build config 00:02:19.529 common/dpaax: not in enabled drivers build config 00:02:19.529 common/iavf: not in enabled drivers build config 00:02:19.529 common/idpf: not in enabled drivers build config 00:02:19.529 common/mvep: not in enabled drivers build config 00:02:19.529 common/octeontx: not in enabled drivers build config 00:02:19.529 bus/auxiliary: not in enabled drivers build config 00:02:19.529 bus/cdx: not in enabled drivers build config 00:02:19.529 bus/dpaa: not in enabled drivers build config 00:02:19.529 bus/fslmc: not in enabled drivers build config 00:02:19.529 bus/ifpga: not in enabled 
drivers build config 00:02:19.529 bus/platform: not in enabled drivers build config 00:02:19.529 bus/vmbus: not in enabled drivers build config 00:02:19.529 common/cnxk: not in enabled drivers build config 00:02:19.529 common/mlx5: not in enabled drivers build config 00:02:19.529 common/nfp: not in enabled drivers build config 00:02:19.529 common/qat: not in enabled drivers build config 00:02:19.529 common/sfc_efx: not in enabled drivers build config 00:02:19.529 mempool/bucket: not in enabled drivers build config 00:02:19.529 mempool/cnxk: not in enabled drivers build config 00:02:19.529 mempool/dpaa: not in enabled drivers build config 00:02:19.529 mempool/dpaa2: not in enabled drivers build config 00:02:19.529 mempool/octeontx: not in enabled drivers build config 00:02:19.529 mempool/stack: not in enabled drivers build config 00:02:19.529 dma/cnxk: not in enabled drivers build config 00:02:19.529 dma/dpaa: not in enabled drivers build config 00:02:19.529 dma/dpaa2: not in enabled drivers build config 00:02:19.529 dma/hisilicon: not in enabled drivers build config 00:02:19.529 dma/idxd: not in enabled drivers build config 00:02:19.529 dma/ioat: not in enabled drivers build config 00:02:19.529 dma/skeleton: not in enabled drivers build config 00:02:19.529 net/af_packet: not in enabled drivers build config 00:02:19.529 net/af_xdp: not in enabled drivers build config 00:02:19.529 net/ark: not in enabled drivers build config 00:02:19.529 net/atlantic: not in enabled drivers build config 00:02:19.529 net/avp: not in enabled drivers build config 00:02:19.529 net/axgbe: not in enabled drivers build config 00:02:19.529 net/bnx2x: not in enabled drivers build config 00:02:19.529 net/bnxt: not in enabled drivers build config 00:02:19.529 net/bonding: not in enabled drivers build config 00:02:19.529 net/cnxk: not in enabled drivers build config 00:02:19.529 net/cpfl: not in enabled drivers build config 00:02:19.529 net/cxgbe: not in enabled drivers build config 00:02:19.529 net/dpaa: not in enabled drivers build config 00:02:19.529 net/dpaa2: not in enabled drivers build config 00:02:19.529 net/e1000: not in enabled drivers build config 00:02:19.529 net/ena: not in enabled drivers build config 00:02:19.529 net/enetc: not in enabled drivers build config 00:02:19.529 net/enetfec: not in enabled drivers build config 00:02:19.529 net/enic: not in enabled drivers build config 00:02:19.529 net/failsafe: not in enabled drivers build config 00:02:19.529 net/fm10k: not in enabled drivers build config 00:02:19.529 net/gve: not in enabled drivers build config 00:02:19.529 net/hinic: not in enabled drivers build config 00:02:19.529 net/hns3: not in enabled drivers build config 00:02:19.529 net/i40e: not in enabled drivers build config 00:02:19.529 net/iavf: not in enabled drivers build config 00:02:19.529 net/ice: not in enabled drivers build config 00:02:19.529 net/idpf: not in enabled drivers build config 00:02:19.529 net/igc: not in enabled drivers build config 00:02:19.529 net/ionic: not in enabled drivers build config 00:02:19.529 net/ipn3ke: not in enabled drivers build config 00:02:19.529 net/ixgbe: not in enabled drivers build config 00:02:19.529 net/mana: not in enabled drivers build config 00:02:19.529 net/memif: not in enabled drivers build config 00:02:19.529 net/mlx4: not in enabled drivers build config 00:02:19.529 net/mlx5: not in enabled drivers build config 00:02:19.529 net/mvneta: not in enabled drivers build config 00:02:19.529 net/mvpp2: not in enabled drivers build config 00:02:19.529 
net/netvsc: not in enabled drivers build config 00:02:19.529 net/nfb: not in enabled drivers build config 00:02:19.529 net/nfp: not in enabled drivers build config 00:02:19.529 net/ngbe: not in enabled drivers build config 00:02:19.529 net/null: not in enabled drivers build config 00:02:19.529 net/octeontx: not in enabled drivers build config 00:02:19.529 net/octeon_ep: not in enabled drivers build config 00:02:19.529 net/pcap: not in enabled drivers build config 00:02:19.530 net/pfe: not in enabled drivers build config 00:02:19.530 net/qede: not in enabled drivers build config 00:02:19.530 net/ring: not in enabled drivers build config 00:02:19.530 net/sfc: not in enabled drivers build config 00:02:19.530 net/softnic: not in enabled drivers build config 00:02:19.530 net/tap: not in enabled drivers build config 00:02:19.530 net/thunderx: not in enabled drivers build config 00:02:19.530 net/txgbe: not in enabled drivers build config 00:02:19.530 net/vdev_netvsc: not in enabled drivers build config 00:02:19.530 net/vhost: not in enabled drivers build config 00:02:19.530 net/virtio: not in enabled drivers build config 00:02:19.530 net/vmxnet3: not in enabled drivers build config 00:02:19.530 raw/*: missing internal dependency, "rawdev" 00:02:19.530 crypto/armv8: not in enabled drivers build config 00:02:19.530 crypto/bcmfs: not in enabled drivers build config 00:02:19.530 crypto/caam_jr: not in enabled drivers build config 00:02:19.530 crypto/ccp: not in enabled drivers build config 00:02:19.530 crypto/cnxk: not in enabled drivers build config 00:02:19.530 crypto/dpaa_sec: not in enabled drivers build config 00:02:19.530 crypto/dpaa2_sec: not in enabled drivers build config 00:02:19.530 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.530 crypto/mlx5: not in enabled drivers build config 00:02:19.530 crypto/mvsam: not in enabled drivers build config 00:02:19.530 crypto/nitrox: not in enabled drivers build config 00:02:19.530 crypto/null: not in enabled drivers build config 00:02:19.530 crypto/octeontx: not in enabled drivers build config 00:02:19.530 crypto/openssl: not in enabled drivers build config 00:02:19.530 crypto/scheduler: not in enabled drivers build config 00:02:19.530 crypto/uadk: not in enabled drivers build config 00:02:19.530 crypto/virtio: not in enabled drivers build config 00:02:19.530 compress/isal: not in enabled drivers build config 00:02:19.530 compress/mlx5: not in enabled drivers build config 00:02:19.530 compress/octeontx: not in enabled drivers build config 00:02:19.530 compress/zlib: not in enabled drivers build config 00:02:19.530 regex/*: missing internal dependency, "regexdev" 00:02:19.530 ml/*: missing internal dependency, "mldev" 00:02:19.530 vdpa/ifc: not in enabled drivers build config 00:02:19.530 vdpa/mlx5: not in enabled drivers build config 00:02:19.530 vdpa/nfp: not in enabled drivers build config 00:02:19.530 vdpa/sfc: not in enabled drivers build config 00:02:19.530 event/*: missing internal dependency, "eventdev" 00:02:19.530 baseband/*: missing internal dependency, "bbdev" 00:02:19.530 gpu/*: missing internal dependency, "gpudev" 00:02:19.530 00:02:19.530 00:02:19.530 Build targets in project: 84 00:02:19.530 00:02:19.530 DPDK 23.11.0 00:02:19.530 00:02:19.530 User defined options 00:02:19.530 buildtype : debug 00:02:19.530 default_library : shared 00:02:19.530 libdir : lib 00:02:19.530 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:19.530 b_sanitize : address 00:02:19.530 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 
-Wno-stringop-overread -Wno-array-bounds 00:02:19.530 c_link_args : 00:02:19.530 cpu_instruction_set: native 00:02:19.530 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:19.530 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:19.530 enable_docs : false 00:02:19.530 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:19.530 enable_kmods : false 00:02:19.530 tests : false 00:02:19.530 00:02:19.530 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.791 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:20.052 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:20.052 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:20.052 [3/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:20.052 [4/264] Linking static target lib/librte_kvargs.a 00:02:20.052 [5/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:20.052 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:20.052 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:20.052 [8/264] Linking static target lib/librte_log.a 00:02:20.052 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:20.052 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:20.052 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:20.313 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:20.313 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:20.313 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:20.313 [15/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.313 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:20.313 [17/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:20.313 [18/264] Linking static target lib/librte_telemetry.a 00:02:20.574 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:20.574 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:20.574 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:20.574 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:20.574 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:20.574 [24/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.836 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:20.836 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:20.836 [27/264] Linking target lib/librte_log.so.24.0 00:02:20.836 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:20.836 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:20.836 [30/264] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:20.836 [31/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:21.096 [32/264] Linking target lib/librte_kvargs.so.24.0 00:02:21.096 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:21.096 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:21.096 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:21.096 [36/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.096 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:21.096 [38/264] Linking target lib/librte_telemetry.so.24.0 00:02:21.096 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:21.096 [40/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:21.096 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:21.357 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:21.357 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:21.357 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:21.357 [45/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:21.357 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:21.357 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:21.357 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:21.618 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:21.618 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:21.618 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:21.618 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:21.618 [53/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:21.618 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:21.618 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:21.618 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:21.618 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:21.879 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:21.879 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:21.879 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:21.879 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:21.879 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:21.879 [63/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:21.879 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:21.879 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:22.140 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:22.140 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:22.140 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:22.140 [69/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:22.140 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:22.401 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:22.401 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:22.401 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:22.401 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:22.401 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:22.401 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:22.401 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:22.401 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:22.401 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:22.662 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:22.662 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:22.662 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:22.662 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:22.662 [84/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:22.662 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:22.662 [86/264] Linking static target lib/librte_ring.a 00:02:22.662 [87/264] Linking static target lib/librte_eal.a 00:02:22.662 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:22.922 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:22.922 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:22.922 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:22.922 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.922 [93/264] Linking static target lib/librte_mempool.a 00:02:22.922 [94/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:22.922 [95/264] Linking static target lib/librte_rcu.a 00:02:23.183 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:23.183 [97/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.183 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:23.183 [99/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:23.444 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:23.444 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:23.444 [102/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:23.444 [103/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:23.444 [104/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.444 [105/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:23.444 [106/264] Linking static target lib/librte_meter.a 00:02:23.444 [107/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:23.704 [108/264] Linking static target lib/librte_net.a 00:02:23.704 [109/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:23.704 [110/264] Linking static target lib/librte_mbuf.a 00:02:23.704 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:23.704 [112/264] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.704 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:23.704 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:23.704 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:23.965 [116/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.965 [117/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.965 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.226 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.226 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.226 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.487 [122/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.487 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.487 [124/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:24.487 [125/264] Linking static target lib/librte_pci.a 00:02:24.487 [126/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.487 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.487 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.748 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.748 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.748 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.748 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:24.748 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.748 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.748 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.748 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.748 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.748 [138/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.748 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.748 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.748 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.748 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:25.009 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.009 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:25.009 [145/264] Linking static target lib/librte_cmdline.a 00:02:25.009 [146/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:25.009 [147/264] Linking static target lib/librte_timer.a 00:02:25.270 [148/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.270 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:25.270 [150/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:25.270 [151/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.270 [152/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:25.270 [153/264] Linking static target lib/librte_ethdev.a 00:02:25.532 [154/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:25.532 [155/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.532 [156/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:25.532 [157/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:25.532 [158/264] Linking static target lib/librte_compressdev.a 00:02:25.532 [159/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:25.793 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:25.793 [161/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.793 [162/264] Linking static target lib/librte_dmadev.a 00:02:25.793 [163/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:25.793 [164/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.793 [165/264] Linking static target lib/librte_hash.a 00:02:26.054 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:26.054 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:26.054 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:26.054 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:26.054 [170/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.316 [171/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.316 [172/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.316 [173/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:26.316 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:26.316 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:26.316 [176/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.316 [177/264] Linking static target lib/librte_cryptodev.a 00:02:26.316 [178/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:26.316 [179/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.577 [180/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:26.577 [181/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:26.577 [182/264] Linking static target lib/librte_power.a 00:02:26.577 [183/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:26.577 [184/264] Linking static target lib/librte_reorder.a 00:02:26.577 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:26.838 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:26.838 [187/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:26.838 [188/264] Linking static target lib/librte_security.a 00:02:26.838 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:26.838 [190/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.099 [191/264] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:27.099 [192/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.099 [193/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.099 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.099 [195/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:27.359 [196/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.359 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.359 [198/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:27.359 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.620 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:27.620 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:27.620 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:27.620 [203/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.620 [204/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:27.620 [205/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:27.620 [206/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:27.620 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:27.620 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:27.881 [209/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.881 [210/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.881 [211/264] Linking static target drivers/librte_bus_vdev.a 00:02:27.881 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:27.881 [213/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.881 [214/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.881 [215/264] Linking static target drivers/librte_bus_pci.a 00:02:27.881 [216/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.881 [217/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.142 [218/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:28.142 [219/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.142 [220/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:28.142 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.142 [222/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.142 [223/264] Linking static target drivers/librte_mempool_ring.a 00:02:29.086 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.347 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.347 [226/264] Linking target lib/librte_eal.so.24.0 00:02:29.347 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:29.607 [228/264] Linking target 
lib/librte_dmadev.so.24.0 00:02:29.607 [229/264] Linking target lib/librte_pci.so.24.0 00:02:29.607 [230/264] Linking target lib/librte_ring.so.24.0 00:02:29.607 [231/264] Linking target lib/librte_meter.so.24.0 00:02:29.607 [232/264] Linking target lib/librte_timer.so.24.0 00:02:29.607 [233/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:29.607 [234/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:29.607 [235/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:29.607 [236/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:29.607 [237/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:29.607 [238/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:29.607 [239/264] Linking target lib/librte_mempool.so.24.0 00:02:29.607 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:29.607 [241/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:29.607 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:29.607 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:29.867 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:29.867 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:29.867 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:29.867 [247/264] Linking target lib/librte_compressdev.so.24.0 00:02:29.867 [248/264] Linking target lib/librte_net.so.24.0 00:02:29.867 [249/264] Linking target lib/librte_reorder.so.24.0 00:02:29.867 [250/264] Linking target lib/librte_cryptodev.so.24.0 00:02:29.867 [251/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.128 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:30.128 [253/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:30.128 [254/264] Linking target lib/librte_cmdline.so.24.0 00:02:30.128 [255/264] Linking target lib/librte_hash.so.24.0 00:02:30.128 [256/264] Linking target lib/librte_security.so.24.0 00:02:30.128 [257/264] Linking target lib/librte_ethdev.so.24.0 00:02:30.128 [258/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:30.128 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:30.128 [260/264] Linking target lib/librte_power.so.24.0 00:02:32.671 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:32.671 [262/264] Linking static target lib/librte_vhost.a 00:02:34.056 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.056 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:34.056 INFO: autodetecting backend as ninja 00:02:34.056 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:34.629 CC lib/ut_mock/mock.o 00:02:34.629 CC lib/log/log_flags.o 00:02:34.629 CC lib/log/log_deprecated.o 00:02:34.629 CC lib/log/log.o 00:02:34.629 CC lib/ut/ut.o 00:02:34.890 LIB libspdk_ut_mock.a 00:02:34.891 LIB libspdk_log.a 00:02:34.891 SO libspdk_ut_mock.so.5.0 00:02:34.891 LIB libspdk_ut.a 00:02:34.891 SO libspdk_log.so.6.1 00:02:34.891 SO libspdk_ut.so.1.0 00:02:34.891 SYMLINK libspdk_ut_mock.so 00:02:34.891 SYMLINK libspdk_ut.so 00:02:34.891 
SYMLINK libspdk_log.so 00:02:35.150 CC lib/ioat/ioat.o 00:02:35.150 CC lib/dma/dma.o 00:02:35.150 CXX lib/trace_parser/trace.o 00:02:35.150 CC lib/util/base64.o 00:02:35.150 CC lib/util/bit_array.o 00:02:35.150 CC lib/util/cpuset.o 00:02:35.151 CC lib/util/crc16.o 00:02:35.151 CC lib/util/crc32c.o 00:02:35.151 CC lib/util/crc32.o 00:02:35.151 CC lib/vfio_user/host/vfio_user_pci.o 00:02:35.151 CC lib/vfio_user/host/vfio_user.o 00:02:35.151 CC lib/util/crc32_ieee.o 00:02:35.151 CC lib/util/crc64.o 00:02:35.151 LIB libspdk_dma.a 00:02:35.151 CC lib/util/dif.o 00:02:35.151 SO libspdk_dma.so.3.0 00:02:35.416 CC lib/util/fd.o 00:02:35.416 CC lib/util/file.o 00:02:35.416 CC lib/util/hexlify.o 00:02:35.416 CC lib/util/iov.o 00:02:35.416 SYMLINK libspdk_dma.so 00:02:35.416 CC lib/util/math.o 00:02:35.416 LIB libspdk_ioat.a 00:02:35.416 CC lib/util/pipe.o 00:02:35.416 SO libspdk_ioat.so.6.0 00:02:35.416 LIB libspdk_vfio_user.a 00:02:35.416 CC lib/util/strerror_tls.o 00:02:35.416 CC lib/util/string.o 00:02:35.416 CC lib/util/uuid.o 00:02:35.416 SO libspdk_vfio_user.so.4.0 00:02:35.416 SYMLINK libspdk_ioat.so 00:02:35.416 CC lib/util/fd_group.o 00:02:35.416 CC lib/util/xor.o 00:02:35.416 SYMLINK libspdk_vfio_user.so 00:02:35.416 CC lib/util/zipf.o 00:02:35.990 LIB libspdk_util.a 00:02:35.990 SO libspdk_util.so.8.0 00:02:35.990 SYMLINK libspdk_util.so 00:02:35.990 LIB libspdk_trace_parser.a 00:02:35.990 SO libspdk_trace_parser.so.4.0 00:02:36.259 CC lib/env_dpdk/env.o 00:02:36.259 CC lib/vmd/vmd.o 00:02:36.259 CC lib/env_dpdk/memory.o 00:02:36.259 CC lib/idxd/idxd.o 00:02:36.259 CC lib/vmd/led.o 00:02:36.259 CC lib/env_dpdk/pci.o 00:02:36.259 CC lib/rdma/common.o 00:02:36.259 CC lib/json/json_parse.o 00:02:36.259 CC lib/conf/conf.o 00:02:36.259 SYMLINK libspdk_trace_parser.so 00:02:36.259 CC lib/json/json_util.o 00:02:36.259 CC lib/json/json_write.o 00:02:36.259 LIB libspdk_conf.a 00:02:36.259 CC lib/rdma/rdma_verbs.o 00:02:36.259 CC lib/idxd/idxd_user.o 00:02:36.259 SO libspdk_conf.so.5.0 00:02:36.520 CC lib/idxd/idxd_kernel.o 00:02:36.520 SYMLINK libspdk_conf.so 00:02:36.520 CC lib/env_dpdk/init.o 00:02:36.520 CC lib/env_dpdk/threads.o 00:02:36.520 CC lib/env_dpdk/pci_ioat.o 00:02:36.520 LIB libspdk_rdma.a 00:02:36.520 CC lib/env_dpdk/pci_virtio.o 00:02:36.520 LIB libspdk_json.a 00:02:36.520 SO libspdk_rdma.so.5.0 00:02:36.520 SO libspdk_json.so.5.1 00:02:36.520 CC lib/env_dpdk/pci_vmd.o 00:02:36.520 SYMLINK libspdk_rdma.so 00:02:36.520 CC lib/env_dpdk/pci_idxd.o 00:02:36.520 CC lib/env_dpdk/pci_event.o 00:02:36.520 SYMLINK libspdk_json.so 00:02:36.520 CC lib/env_dpdk/sigbus_handler.o 00:02:36.781 CC lib/env_dpdk/pci_dpdk.o 00:02:36.781 LIB libspdk_idxd.a 00:02:36.781 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.781 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.781 SO libspdk_idxd.so.11.0 00:02:36.781 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:36.781 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.781 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.781 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:36.781 SYMLINK libspdk_idxd.so 00:02:36.781 LIB libspdk_vmd.a 00:02:36.781 SO libspdk_vmd.so.5.0 00:02:36.781 SYMLINK libspdk_vmd.so 00:02:37.041 LIB libspdk_jsonrpc.a 00:02:37.041 SO libspdk_jsonrpc.so.5.1 00:02:37.041 SYMLINK libspdk_jsonrpc.so 00:02:37.303 CC lib/rpc/rpc.o 00:02:37.303 LIB libspdk_rpc.a 00:02:37.303 SO libspdk_rpc.so.5.0 00:02:37.564 SYMLINK libspdk_rpc.so 00:02:37.564 LIB libspdk_env_dpdk.a 00:02:37.564 CC lib/trace/trace.o 00:02:37.564 CC lib/trace/trace_rpc.o 00:02:37.564 CC lib/trace/trace_flags.o 00:02:37.564 
CC lib/sock/sock.o 00:02:37.564 CC lib/sock/sock_rpc.o 00:02:37.564 CC lib/notify/notify.o 00:02:37.564 CC lib/notify/notify_rpc.o 00:02:37.564 SO libspdk_env_dpdk.so.13.0 00:02:37.564 LIB libspdk_notify.a 00:02:37.564 SYMLINK libspdk_env_dpdk.so 00:02:37.826 SO libspdk_notify.so.5.0 00:02:37.826 LIB libspdk_trace.a 00:02:37.826 SYMLINK libspdk_notify.so 00:02:37.826 SO libspdk_trace.so.9.0 00:02:37.826 SYMLINK libspdk_trace.so 00:02:37.826 LIB libspdk_sock.a 00:02:38.088 SO libspdk_sock.so.8.0 00:02:38.088 CC lib/thread/thread.o 00:02:38.088 CC lib/thread/iobuf.o 00:02:38.088 SYMLINK libspdk_sock.so 00:02:38.088 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:38.088 CC lib/nvme/nvme_ctrlr.o 00:02:38.088 CC lib/nvme/nvme_fabric.o 00:02:38.088 CC lib/nvme/nvme_ns_cmd.o 00:02:38.088 CC lib/nvme/nvme_qpair.o 00:02:38.088 CC lib/nvme/nvme_pcie_common.o 00:02:38.088 CC lib/nvme/nvme_pcie.o 00:02:38.088 CC lib/nvme/nvme_ns.o 00:02:38.349 CC lib/nvme/nvme.o 00:02:38.610 CC lib/nvme/nvme_quirks.o 00:02:38.610 CC lib/nvme/nvme_transport.o 00:02:38.872 CC lib/nvme/nvme_discovery.o 00:02:38.872 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.872 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.872 CC lib/nvme/nvme_tcp.o 00:02:39.133 CC lib/nvme/nvme_opal.o 00:02:39.133 CC lib/nvme/nvme_io_msg.o 00:02:39.133 CC lib/nvme/nvme_poll_group.o 00:02:39.395 CC lib/nvme/nvme_zns.o 00:02:39.395 CC lib/nvme/nvme_cuse.o 00:02:39.395 CC lib/nvme/nvme_vfio_user.o 00:02:39.395 CC lib/nvme/nvme_rdma.o 00:02:39.395 LIB libspdk_thread.a 00:02:39.395 SO libspdk_thread.so.9.0 00:02:39.656 SYMLINK libspdk_thread.so 00:02:39.656 CC lib/accel/accel.o 00:02:39.656 CC lib/virtio/virtio.o 00:02:39.656 CC lib/blob/blobstore.o 00:02:39.656 CC lib/init/json_config.o 00:02:39.917 CC lib/blob/request.o 00:02:39.917 CC lib/virtio/virtio_vhost_user.o 00:02:39.917 CC lib/init/subsystem.o 00:02:39.917 CC lib/virtio/virtio_vfio_user.o 00:02:39.917 CC lib/blob/zeroes.o 00:02:39.917 CC lib/init/subsystem_rpc.o 00:02:40.179 CC lib/init/rpc.o 00:02:40.179 CC lib/accel/accel_rpc.o 00:02:40.179 CC lib/accel/accel_sw.o 00:02:40.179 CC lib/virtio/virtio_pci.o 00:02:40.179 CC lib/blob/blob_bs_dev.o 00:02:40.179 LIB libspdk_init.a 00:02:40.179 SO libspdk_init.so.4.0 00:02:40.179 SYMLINK libspdk_init.so 00:02:40.442 LIB libspdk_virtio.a 00:02:40.442 CC lib/event/app.o 00:02:40.442 CC lib/event/log_rpc.o 00:02:40.442 CC lib/event/scheduler_static.o 00:02:40.442 CC lib/event/reactor.o 00:02:40.442 CC lib/event/app_rpc.o 00:02:40.442 SO libspdk_virtio.so.6.0 00:02:40.442 LIB libspdk_nvme.a 00:02:40.442 LIB libspdk_accel.a 00:02:40.442 SYMLINK libspdk_virtio.so 00:02:40.442 SO libspdk_accel.so.14.0 00:02:40.442 SO libspdk_nvme.so.12.0 00:02:40.442 SYMLINK libspdk_accel.so 00:02:40.703 CC lib/bdev/bdev.o 00:02:40.703 CC lib/bdev/bdev_zone.o 00:02:40.703 CC lib/bdev/part.o 00:02:40.703 CC lib/bdev/bdev_rpc.o 00:02:40.703 CC lib/bdev/scsi_nvme.o 00:02:40.703 LIB libspdk_event.a 00:02:40.703 SYMLINK libspdk_nvme.so 00:02:40.703 SO libspdk_event.so.12.0 00:02:40.964 SYMLINK libspdk_event.so 00:02:42.881 LIB libspdk_blob.a 00:02:42.881 SO libspdk_blob.so.10.1 00:02:42.881 SYMLINK libspdk_blob.so 00:02:42.881 CC lib/lvol/lvol.o 00:02:42.881 CC lib/blobfs/blobfs.o 00:02:42.881 CC lib/blobfs/tree.o 00:02:43.453 LIB libspdk_bdev.a 00:02:43.453 SO libspdk_bdev.so.14.0 00:02:43.718 SYMLINK libspdk_bdev.so 00:02:43.718 CC lib/nvmf/ctrlr.o 00:02:43.718 CC lib/ftl/ftl_core.o 00:02:43.718 CC lib/nvmf/ctrlr_discovery.o 00:02:43.718 CC lib/ftl/ftl_init.o 00:02:43.718 CC lib/nvmf/ctrlr_bdev.o 
00:02:43.718 CC lib/ublk/ublk.o 00:02:43.718 CC lib/scsi/dev.o 00:02:43.718 CC lib/nbd/nbd.o 00:02:43.718 LIB libspdk_blobfs.a 00:02:43.718 SO libspdk_blobfs.so.9.0 00:02:43.986 SYMLINK libspdk_blobfs.so 00:02:43.986 CC lib/scsi/lun.o 00:02:43.986 LIB libspdk_lvol.a 00:02:43.986 SO libspdk_lvol.so.9.1 00:02:43.986 CC lib/nbd/nbd_rpc.o 00:02:43.986 CC lib/ublk/ublk_rpc.o 00:02:43.986 SYMLINK libspdk_lvol.so 00:02:43.986 CC lib/ftl/ftl_layout.o 00:02:43.986 CC lib/ftl/ftl_debug.o 00:02:43.986 CC lib/scsi/port.o 00:02:43.986 CC lib/scsi/scsi.o 00:02:43.986 CC lib/scsi/scsi_bdev.o 00:02:44.246 LIB libspdk_nbd.a 00:02:44.246 SO libspdk_nbd.so.6.0 00:02:44.246 CC lib/nvmf/subsystem.o 00:02:44.246 CC lib/nvmf/nvmf.o 00:02:44.246 CC lib/nvmf/nvmf_rpc.o 00:02:44.246 SYMLINK libspdk_nbd.so 00:02:44.246 CC lib/nvmf/transport.o 00:02:44.246 CC lib/ftl/ftl_io.o 00:02:44.246 CC lib/ftl/ftl_sb.o 00:02:44.246 LIB libspdk_ublk.a 00:02:44.504 SO libspdk_ublk.so.2.0 00:02:44.504 CC lib/nvmf/tcp.o 00:02:44.504 CC lib/ftl/ftl_l2p.o 00:02:44.504 CC lib/ftl/ftl_l2p_flat.o 00:02:44.504 SYMLINK libspdk_ublk.so 00:02:44.504 CC lib/ftl/ftl_nv_cache.o 00:02:44.504 CC lib/scsi/scsi_pr.o 00:02:44.504 CC lib/nvmf/rdma.o 00:02:44.504 CC lib/ftl/ftl_band.o 00:02:44.762 CC lib/ftl/ftl_band_ops.o 00:02:44.762 CC lib/ftl/ftl_writer.o 00:02:44.762 CC lib/scsi/scsi_rpc.o 00:02:45.022 CC lib/scsi/task.o 00:02:45.022 CC lib/ftl/ftl_rq.o 00:02:45.022 CC lib/ftl/ftl_reloc.o 00:02:45.022 CC lib/ftl/ftl_l2p_cache.o 00:02:45.022 CC lib/ftl/ftl_p2l.o 00:02:45.022 CC lib/ftl/mngt/ftl_mngt.o 00:02:45.022 LIB libspdk_scsi.a 00:02:45.022 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:45.283 SO libspdk_scsi.so.8.0 00:02:45.283 SYMLINK libspdk_scsi.so 00:02:45.283 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:45.283 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:45.283 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:45.283 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:45.283 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:45.283 CC lib/iscsi/conn.o 00:02:45.283 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.283 CC lib/vhost/vhost.o 00:02:45.543 CC lib/vhost/vhost_rpc.o 00:02:45.543 CC lib/vhost/vhost_scsi.o 00:02:45.543 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.543 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.543 CC lib/vhost/vhost_blk.o 00:02:45.543 CC lib/vhost/rte_vhost_user.o 00:02:45.543 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.804 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:45.804 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:45.804 CC lib/ftl/utils/ftl_conf.o 00:02:46.065 CC lib/ftl/utils/ftl_md.o 00:02:46.065 CC lib/ftl/utils/ftl_mempool.o 00:02:46.065 CC lib/iscsi/init_grp.o 00:02:46.065 CC lib/iscsi/iscsi.o 00:02:46.065 CC lib/iscsi/md5.o 00:02:46.065 CC lib/ftl/utils/ftl_bitmap.o 00:02:46.065 CC lib/ftl/utils/ftl_property.o 00:02:46.065 CC lib/iscsi/param.o 00:02:46.065 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:46.065 CC lib/iscsi/portal_grp.o 00:02:46.325 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:46.325 CC lib/iscsi/tgt_node.o 00:02:46.325 CC lib/iscsi/iscsi_subsystem.o 00:02:46.325 CC lib/iscsi/iscsi_rpc.o 00:02:46.325 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:46.325 CC lib/iscsi/task.o 00:02:46.325 LIB libspdk_nvmf.a 00:02:46.325 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:46.325 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:46.325 LIB libspdk_vhost.a 00:02:46.584 SO libspdk_nvmf.so.17.0 00:02:46.584 SO libspdk_vhost.so.7.1 00:02:46.584 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:46.584 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:46.584 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:46.584 SYMLINK 
libspdk_vhost.so 00:02:46.584 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:46.584 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:46.584 SYMLINK libspdk_nvmf.so 00:02:46.584 CC lib/ftl/base/ftl_base_dev.o 00:02:46.584 CC lib/ftl/base/ftl_base_bdev.o 00:02:46.584 CC lib/ftl/ftl_trace.o 00:02:46.845 LIB libspdk_ftl.a 00:02:46.845 SO libspdk_ftl.so.8.0 00:02:47.106 SYMLINK libspdk_ftl.so 00:02:47.367 LIB libspdk_iscsi.a 00:02:47.367 SO libspdk_iscsi.so.7.0 00:02:47.628 SYMLINK libspdk_iscsi.so 00:02:47.628 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.889 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:47.889 CC module/sock/posix/posix.o 00:02:47.889 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:47.889 CC module/accel/ioat/accel_ioat.o 00:02:47.889 CC module/scheduler/gscheduler/gscheduler.o 00:02:47.889 CC module/accel/error/accel_error.o 00:02:47.889 CC module/blob/bdev/blob_bdev.o 00:02:47.889 CC module/accel/iaa/accel_iaa.o 00:02:47.889 CC module/accel/dsa/accel_dsa.o 00:02:47.890 LIB libspdk_env_dpdk_rpc.a 00:02:47.890 SO libspdk_env_dpdk_rpc.so.5.0 00:02:47.890 LIB libspdk_scheduler_gscheduler.a 00:02:47.890 SYMLINK libspdk_env_dpdk_rpc.so 00:02:47.890 CC module/accel/dsa/accel_dsa_rpc.o 00:02:47.890 SO libspdk_scheduler_gscheduler.so.3.0 00:02:47.890 CC module/accel/error/accel_error_rpc.o 00:02:47.890 LIB libspdk_scheduler_dpdk_governor.a 00:02:47.890 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:47.890 SYMLINK libspdk_scheduler_gscheduler.so 00:02:47.890 CC module/accel/iaa/accel_iaa_rpc.o 00:02:47.890 LIB libspdk_scheduler_dynamic.a 00:02:47.890 CC module/accel/ioat/accel_ioat_rpc.o 00:02:47.890 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:47.890 SO libspdk_scheduler_dynamic.so.3.0 00:02:47.890 LIB libspdk_accel_dsa.a 00:02:47.890 SO libspdk_accel_dsa.so.4.0 00:02:47.890 SYMLINK libspdk_scheduler_dynamic.so 00:02:48.150 LIB libspdk_blob_bdev.a 00:02:48.150 LIB libspdk_accel_iaa.a 00:02:48.150 LIB libspdk_accel_error.a 00:02:48.150 SYMLINK libspdk_accel_dsa.so 00:02:48.150 LIB libspdk_accel_ioat.a 00:02:48.150 SO libspdk_blob_bdev.so.10.1 00:02:48.150 SO libspdk_accel_error.so.1.0 00:02:48.150 SO libspdk_accel_iaa.so.2.0 00:02:48.150 SO libspdk_accel_ioat.so.5.0 00:02:48.150 SYMLINK libspdk_accel_iaa.so 00:02:48.150 SYMLINK libspdk_blob_bdev.so 00:02:48.150 SYMLINK libspdk_accel_error.so 00:02:48.150 SYMLINK libspdk_accel_ioat.so 00:02:48.150 CC module/bdev/error/vbdev_error.o 00:02:48.150 CC module/bdev/gpt/gpt.o 00:02:48.150 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.150 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.150 CC module/bdev/nvme/bdev_nvme.o 00:02:48.150 CC module/bdev/delay/vbdev_delay.o 00:02:48.150 CC module/blobfs/bdev/blobfs_bdev.o 00:02:48.150 CC module/bdev/malloc/bdev_malloc.o 00:02:48.150 CC module/bdev/null/bdev_null.o 00:02:48.411 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.411 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.411 CC module/bdev/error/vbdev_error_rpc.o 00:02:48.411 LIB libspdk_sock_posix.a 00:02:48.411 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:48.411 CC module/bdev/null/bdev_null_rpc.o 00:02:48.411 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:48.411 SO libspdk_sock_posix.so.5.0 00:02:48.673 LIB libspdk_blobfs_bdev.a 00:02:48.673 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:48.673 SYMLINK libspdk_sock_posix.so 00:02:48.673 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.673 SO libspdk_blobfs_bdev.so.5.0 00:02:48.673 LIB libspdk_bdev_delay.a 00:02:48.673 LIB libspdk_bdev_error.a 00:02:48.673 LIB libspdk_bdev_gpt.a 00:02:48.673 SO 
libspdk_bdev_delay.so.5.0 00:02:48.673 SO libspdk_bdev_error.so.5.0 00:02:48.673 LIB libspdk_bdev_null.a 00:02:48.673 SO libspdk_bdev_gpt.so.5.0 00:02:48.673 LIB libspdk_bdev_passthru.a 00:02:48.673 SYMLINK libspdk_blobfs_bdev.so 00:02:48.673 SO libspdk_bdev_null.so.5.0 00:02:48.673 SO libspdk_bdev_passthru.so.5.0 00:02:48.673 SYMLINK libspdk_bdev_error.so 00:02:48.673 SYMLINK libspdk_bdev_delay.so 00:02:48.673 SYMLINK libspdk_bdev_gpt.so 00:02:48.673 LIB libspdk_bdev_malloc.a 00:02:48.673 SYMLINK libspdk_bdev_null.so 00:02:48.673 SYMLINK libspdk_bdev_passthru.so 00:02:48.673 SO libspdk_bdev_malloc.so.5.0 00:02:48.673 CC module/bdev/split/vbdev_split.o 00:02:48.673 CC module/bdev/raid/bdev_raid.o 00:02:48.673 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:48.673 CC module/bdev/xnvme/bdev_xnvme.o 00:02:48.934 CC module/bdev/aio/bdev_aio.o 00:02:48.934 SYMLINK libspdk_bdev_malloc.so 00:02:48.934 CC module/bdev/aio/bdev_aio_rpc.o 00:02:48.934 CC module/bdev/ftl/bdev_ftl.o 00:02:48.934 CC module/bdev/iscsi/bdev_iscsi.o 00:02:48.934 LIB libspdk_bdev_lvol.a 00:02:48.934 SO libspdk_bdev_lvol.so.5.0 00:02:48.934 CC module/bdev/split/vbdev_split_rpc.o 00:02:48.934 SYMLINK libspdk_bdev_lvol.so 00:02:48.934 CC module/bdev/raid/bdev_raid_rpc.o 00:02:48.934 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:48.934 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:48.934 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.196 LIB libspdk_bdev_split.a 00:02:49.196 SO libspdk_bdev_split.so.5.0 00:02:49.196 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.196 LIB libspdk_bdev_aio.a 00:02:49.196 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.196 SO libspdk_bdev_aio.so.5.0 00:02:49.196 SYMLINK libspdk_bdev_split.so 00:02:49.196 LIB libspdk_bdev_zone_block.a 00:02:49.196 CC module/bdev/nvme/nvme_rpc.o 00:02:49.196 LIB libspdk_bdev_xnvme.a 00:02:49.196 LIB libspdk_bdev_ftl.a 00:02:49.196 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.196 SO libspdk_bdev_zone_block.so.5.0 00:02:49.196 SYMLINK libspdk_bdev_aio.so 00:02:49.196 SO libspdk_bdev_xnvme.so.2.0 00:02:49.196 CC module/bdev/nvme/vbdev_opal.o 00:02:49.196 SO libspdk_bdev_ftl.so.5.0 00:02:49.196 LIB libspdk_bdev_iscsi.a 00:02:49.196 SYMLINK libspdk_bdev_zone_block.so 00:02:49.196 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.196 SYMLINK libspdk_bdev_xnvme.so 00:02:49.196 SO libspdk_bdev_iscsi.so.5.0 00:02:49.196 SYMLINK libspdk_bdev_ftl.so 00:02:49.196 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.196 CC module/bdev/raid/raid0.o 00:02:49.196 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.196 SYMLINK libspdk_bdev_iscsi.so 00:02:49.196 CC module/bdev/raid/raid1.o 00:02:49.458 CC module/bdev/raid/concat.o 00:02:49.458 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.458 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.458 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.718 LIB libspdk_bdev_raid.a 00:02:49.718 SO libspdk_bdev_raid.so.5.0 00:02:49.718 SYMLINK libspdk_bdev_raid.so 00:02:49.979 LIB libspdk_bdev_virtio.a 00:02:49.979 SO libspdk_bdev_virtio.so.5.0 00:02:49.979 SYMLINK libspdk_bdev_virtio.so 00:02:50.240 LIB libspdk_bdev_nvme.a 00:02:50.240 SO libspdk_bdev_nvme.so.6.0 00:02:50.499 SYMLINK libspdk_bdev_nvme.so 00:02:50.759 CC module/event/subsystems/iobuf/iobuf.o 00:02:50.759 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:50.759 CC module/event/subsystems/vmd/vmd.o 00:02:50.759 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:50.759 CC module/event/subsystems/scheduler/scheduler.o 00:02:50.759 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:50.759 
CC module/event/subsystems/sock/sock.o 00:02:50.759 LIB libspdk_event_sock.a 00:02:50.759 LIB libspdk_event_scheduler.a 00:02:50.759 LIB libspdk_event_vhost_blk.a 00:02:50.759 LIB libspdk_event_iobuf.a 00:02:50.759 LIB libspdk_event_vmd.a 00:02:50.759 SO libspdk_event_sock.so.4.0 00:02:50.759 SO libspdk_event_scheduler.so.3.0 00:02:50.759 SO libspdk_event_vhost_blk.so.2.0 00:02:50.759 SO libspdk_event_iobuf.so.2.0 00:02:50.759 SO libspdk_event_vmd.so.5.0 00:02:50.759 SYMLINK libspdk_event_sock.so 00:02:50.759 SYMLINK libspdk_event_scheduler.so 00:02:50.759 SYMLINK libspdk_event_vhost_blk.so 00:02:50.759 SYMLINK libspdk_event_iobuf.so 00:02:50.759 SYMLINK libspdk_event_vmd.so 00:02:51.020 CC module/event/subsystems/accel/accel.o 00:02:51.020 LIB libspdk_event_accel.a 00:02:51.020 SO libspdk_event_accel.so.5.0 00:02:51.281 SYMLINK libspdk_event_accel.so 00:02:51.281 CC module/event/subsystems/bdev/bdev.o 00:02:51.542 LIB libspdk_event_bdev.a 00:02:51.542 SO libspdk_event_bdev.so.5.0 00:02:51.542 SYMLINK libspdk_event_bdev.so 00:02:51.542 CC module/event/subsystems/scsi/scsi.o 00:02:51.542 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:51.542 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:51.542 CC module/event/subsystems/nbd/nbd.o 00:02:51.542 CC module/event/subsystems/ublk/ublk.o 00:02:51.802 LIB libspdk_event_scsi.a 00:02:51.802 LIB libspdk_event_nbd.a 00:02:51.802 LIB libspdk_event_ublk.a 00:02:51.802 SO libspdk_event_scsi.so.5.0 00:02:51.802 SO libspdk_event_nbd.so.5.0 00:02:51.802 SO libspdk_event_ublk.so.2.0 00:02:51.802 SYMLINK libspdk_event_nbd.so 00:02:51.802 SYMLINK libspdk_event_scsi.so 00:02:51.802 LIB libspdk_event_nvmf.a 00:02:51.802 SYMLINK libspdk_event_ublk.so 00:02:51.802 SO libspdk_event_nvmf.so.5.0 00:02:51.802 SYMLINK libspdk_event_nvmf.so 00:02:52.063 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:52.063 CC module/event/subsystems/iscsi/iscsi.o 00:02:52.063 LIB libspdk_event_vhost_scsi.a 00:02:52.063 LIB libspdk_event_iscsi.a 00:02:52.063 SO libspdk_event_vhost_scsi.so.2.0 00:02:52.063 SO libspdk_event_iscsi.so.5.0 00:02:52.063 SYMLINK libspdk_event_vhost_scsi.so 00:02:52.063 SYMLINK libspdk_event_iscsi.so 00:02:52.324 SO libspdk.so.5.0 00:02:52.324 SYMLINK libspdk.so 00:02:52.324 CXX app/trace/trace.o 00:02:52.324 CC examples/vmd/lsvmd/lsvmd.o 00:02:52.324 CC examples/nvme/hello_world/hello_world.o 00:02:52.324 CC examples/sock/hello_world/hello_sock.o 00:02:52.324 CC examples/ioat/perf/perf.o 00:02:52.324 CC examples/accel/perf/accel_perf.o 00:02:52.324 CC examples/blob/hello_world/hello_blob.o 00:02:52.324 CC examples/bdev/hello_world/hello_bdev.o 00:02:52.324 CC test/accel/dif/dif.o 00:02:52.584 CC examples/nvmf/nvmf/nvmf.o 00:02:52.584 LINK lsvmd 00:02:52.584 LINK hello_world 00:02:52.584 LINK ioat_perf 00:02:52.584 LINK hello_bdev 00:02:52.584 LINK hello_sock 00:02:52.584 LINK hello_blob 00:02:52.584 LINK spdk_trace 00:02:52.845 CC examples/vmd/led/led.o 00:02:52.845 LINK dif 00:02:52.845 LINK nvmf 00:02:52.845 CC examples/nvme/reconnect/reconnect.o 00:02:52.845 CC examples/ioat/verify/verify.o 00:02:52.845 LINK accel_perf 00:02:52.845 CC app/trace_record/trace_record.o 00:02:52.845 LINK led 00:02:52.845 CC examples/bdev/bdevperf/bdevperf.o 00:02:52.845 CC examples/util/zipf/zipf.o 00:02:52.845 CC examples/blob/cli/blobcli.o 00:02:53.106 CC test/app/bdev_svc/bdev_svc.o 00:02:53.106 LINK verify 00:02:53.106 CC test/app/histogram_perf/histogram_perf.o 00:02:53.106 CC test/app/jsoncat/jsoncat.o 00:02:53.106 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 
00:02:53.106 LINK zipf 00:02:53.106 LINK spdk_trace_record 00:02:53.106 LINK reconnect 00:02:53.106 LINK jsoncat 00:02:53.106 LINK bdev_svc 00:02:53.106 LINK histogram_perf 00:02:53.106 CC test/app/stub/stub.o 00:02:53.367 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:53.367 CC app/nvmf_tgt/nvmf_main.o 00:02:53.368 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:53.368 CC app/iscsi_tgt/iscsi_tgt.o 00:02:53.368 LINK stub 00:02:53.368 CC app/spdk_tgt/spdk_tgt.o 00:02:53.368 LINK blobcli 00:02:53.368 CC examples/thread/thread/thread_ex.o 00:02:53.368 LINK nvme_fuzz 00:02:53.368 LINK nvmf_tgt 00:02:53.368 LINK iscsi_tgt 00:02:53.368 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:53.629 LINK spdk_tgt 00:02:53.629 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:53.629 CC examples/nvme/arbitration/arbitration.o 00:02:53.629 LINK bdevperf 00:02:53.629 LINK thread 00:02:53.629 CC app/spdk_lspci/spdk_lspci.o 00:02:53.629 CC app/spdk_nvme_perf/perf.o 00:02:53.629 CC app/spdk_nvme_identify/identify.o 00:02:53.891 LINK spdk_lspci 00:02:53.891 CC test/bdev/bdevio/bdevio.o 00:02:53.891 LINK nvme_manage 00:02:53.891 CC examples/idxd/perf/perf.o 00:02:53.891 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:53.891 LINK arbitration 00:02:53.891 LINK vhost_fuzz 00:02:53.891 CC app/spdk_nvme_discover/discovery_aer.o 00:02:53.891 CC examples/nvme/hotplug/hotplug.o 00:02:54.152 LINK interrupt_tgt 00:02:54.152 LINK bdevio 00:02:54.152 TEST_HEADER include/spdk/accel.h 00:02:54.152 TEST_HEADER include/spdk/accel_module.h 00:02:54.152 TEST_HEADER include/spdk/assert.h 00:02:54.152 TEST_HEADER include/spdk/barrier.h 00:02:54.152 TEST_HEADER include/spdk/base64.h 00:02:54.152 TEST_HEADER include/spdk/bdev.h 00:02:54.152 TEST_HEADER include/spdk/bdev_module.h 00:02:54.152 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.152 TEST_HEADER include/spdk/bit_array.h 00:02:54.152 TEST_HEADER include/spdk/bit_pool.h 00:02:54.152 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.152 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.152 TEST_HEADER include/spdk/blobfs.h 00:02:54.152 TEST_HEADER include/spdk/blob.h 00:02:54.152 LINK spdk_nvme_discover 00:02:54.152 TEST_HEADER include/spdk/conf.h 00:02:54.152 TEST_HEADER include/spdk/config.h 00:02:54.152 TEST_HEADER include/spdk/cpuset.h 00:02:54.152 TEST_HEADER include/spdk/crc16.h 00:02:54.152 TEST_HEADER include/spdk/crc32.h 00:02:54.152 TEST_HEADER include/spdk/crc64.h 00:02:54.152 TEST_HEADER include/spdk/dif.h 00:02:54.152 TEST_HEADER include/spdk/dma.h 00:02:54.152 TEST_HEADER include/spdk/endian.h 00:02:54.152 TEST_HEADER include/spdk/env_dpdk.h 00:02:54.152 TEST_HEADER include/spdk/env.h 00:02:54.152 TEST_HEADER include/spdk/event.h 00:02:54.152 TEST_HEADER include/spdk/fd_group.h 00:02:54.152 TEST_HEADER include/spdk/fd.h 00:02:54.152 TEST_HEADER include/spdk/file.h 00:02:54.152 TEST_HEADER include/spdk/ftl.h 00:02:54.152 TEST_HEADER include/spdk/gpt_spec.h 00:02:54.152 TEST_HEADER include/spdk/hexlify.h 00:02:54.152 TEST_HEADER include/spdk/histogram_data.h 00:02:54.152 TEST_HEADER include/spdk/idxd.h 00:02:54.152 TEST_HEADER include/spdk/idxd_spec.h 00:02:54.152 TEST_HEADER include/spdk/init.h 00:02:54.152 TEST_HEADER include/spdk/ioat.h 00:02:54.152 CC test/blobfs/mkfs/mkfs.o 00:02:54.152 TEST_HEADER include/spdk/ioat_spec.h 00:02:54.152 TEST_HEADER include/spdk/iscsi_spec.h 00:02:54.152 TEST_HEADER include/spdk/json.h 00:02:54.152 TEST_HEADER include/spdk/jsonrpc.h 00:02:54.152 TEST_HEADER include/spdk/likely.h 00:02:54.152 TEST_HEADER include/spdk/log.h 00:02:54.152 
LINK hotplug 00:02:54.152 TEST_HEADER include/spdk/lvol.h 00:02:54.152 TEST_HEADER include/spdk/memory.h 00:02:54.152 TEST_HEADER include/spdk/mmio.h 00:02:54.152 TEST_HEADER include/spdk/nbd.h 00:02:54.152 LINK idxd_perf 00:02:54.152 TEST_HEADER include/spdk/notify.h 00:02:54.152 TEST_HEADER include/spdk/nvme.h 00:02:54.152 TEST_HEADER include/spdk/nvme_intel.h 00:02:54.152 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:54.152 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:54.152 TEST_HEADER include/spdk/nvme_spec.h 00:02:54.152 TEST_HEADER include/spdk/nvme_zns.h 00:02:54.152 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:54.152 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:54.152 TEST_HEADER include/spdk/nvmf.h 00:02:54.152 TEST_HEADER include/spdk/nvmf_spec.h 00:02:54.152 TEST_HEADER include/spdk/nvmf_transport.h 00:02:54.152 TEST_HEADER include/spdk/opal.h 00:02:54.152 TEST_HEADER include/spdk/opal_spec.h 00:02:54.152 TEST_HEADER include/spdk/pci_ids.h 00:02:54.152 TEST_HEADER include/spdk/pipe.h 00:02:54.152 TEST_HEADER include/spdk/queue.h 00:02:54.152 TEST_HEADER include/spdk/reduce.h 00:02:54.152 TEST_HEADER include/spdk/rpc.h 00:02:54.152 TEST_HEADER include/spdk/scheduler.h 00:02:54.152 TEST_HEADER include/spdk/scsi.h 00:02:54.152 TEST_HEADER include/spdk/scsi_spec.h 00:02:54.152 TEST_HEADER include/spdk/sock.h 00:02:54.152 TEST_HEADER include/spdk/stdinc.h 00:02:54.152 TEST_HEADER include/spdk/string.h 00:02:54.152 TEST_HEADER include/spdk/thread.h 00:02:54.152 TEST_HEADER include/spdk/trace.h 00:02:54.152 TEST_HEADER include/spdk/trace_parser.h 00:02:54.152 TEST_HEADER include/spdk/tree.h 00:02:54.152 TEST_HEADER include/spdk/ublk.h 00:02:54.152 TEST_HEADER include/spdk/util.h 00:02:54.152 TEST_HEADER include/spdk/uuid.h 00:02:54.152 TEST_HEADER include/spdk/version.h 00:02:54.152 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:54.152 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:54.152 TEST_HEADER include/spdk/vhost.h 00:02:54.152 TEST_HEADER include/spdk/vmd.h 00:02:54.152 TEST_HEADER include/spdk/xor.h 00:02:54.152 TEST_HEADER include/spdk/zipf.h 00:02:54.152 CXX test/cpp_headers/accel.o 00:02:54.412 CC test/dma/test_dma/test_dma.o 00:02:54.412 CC app/spdk_top/spdk_top.o 00:02:54.412 LINK mkfs 00:02:54.412 CXX test/cpp_headers/accel_module.o 00:02:54.412 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:54.412 CC test/env/mem_callbacks/mem_callbacks.o 00:02:54.412 CC test/env/vtophys/vtophys.o 00:02:54.412 CXX test/cpp_headers/assert.o 00:02:54.412 LINK spdk_nvme_identify 00:02:54.412 LINK spdk_nvme_perf 00:02:54.412 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:54.412 LINK cmb_copy 00:02:54.673 LINK vtophys 00:02:54.673 CXX test/cpp_headers/barrier.o 00:02:54.673 LINK test_dma 00:02:54.673 CXX test/cpp_headers/base64.o 00:02:54.673 LINK env_dpdk_post_init 00:02:54.673 CC examples/nvme/abort/abort.o 00:02:54.673 CC test/env/memory/memory_ut.o 00:02:54.673 CC test/event/event_perf/event_perf.o 00:02:54.673 CXX test/cpp_headers/bdev.o 00:02:54.934 CC test/lvol/esnap/esnap.o 00:02:54.934 LINK event_perf 00:02:54.934 LINK mem_callbacks 00:02:54.934 CC app/vhost/vhost.o 00:02:54.934 CC test/nvme/aer/aer.o 00:02:54.934 LINK iscsi_fuzz 00:02:54.934 CXX test/cpp_headers/bdev_module.o 00:02:54.934 CXX test/cpp_headers/bdev_zone.o 00:02:54.934 CC test/event/reactor/reactor.o 00:02:55.195 LINK vhost 00:02:55.195 LINK abort 00:02:55.195 CXX test/cpp_headers/bit_array.o 00:02:55.195 CC test/rpc_client/rpc_client_test.o 00:02:55.195 LINK reactor 00:02:55.195 LINK aer 00:02:55.195 LINK 
spdk_top 00:02:55.195 CC app/spdk_dd/spdk_dd.o 00:02:55.195 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:55.195 CXX test/cpp_headers/bit_pool.o 00:02:55.195 CXX test/cpp_headers/blob_bdev.o 00:02:55.195 CC test/nvme/reset/reset.o 00:02:55.195 LINK rpc_client_test 00:02:55.195 CXX test/cpp_headers/blobfs_bdev.o 00:02:55.195 CC test/event/reactor_perf/reactor_perf.o 00:02:55.456 LINK pmr_persistence 00:02:55.456 CXX test/cpp_headers/blobfs.o 00:02:55.456 LINK reactor_perf 00:02:55.456 CXX test/cpp_headers/blob.o 00:02:55.456 CC test/thread/poller_perf/poller_perf.o 00:02:55.456 LINK reset 00:02:55.456 LINK memory_ut 00:02:55.456 CXX test/cpp_headers/conf.o 00:02:55.456 CC app/fio/nvme/fio_plugin.o 00:02:55.456 LINK spdk_dd 00:02:55.456 CC test/env/pci/pci_ut.o 00:02:55.717 CC test/event/app_repeat/app_repeat.o 00:02:55.717 LINK poller_perf 00:02:55.717 CC test/nvme/sgl/sgl.o 00:02:55.717 CXX test/cpp_headers/config.o 00:02:55.717 CC app/fio/bdev/fio_plugin.o 00:02:55.717 CXX test/cpp_headers/cpuset.o 00:02:55.717 CC test/nvme/e2edp/nvme_dp.o 00:02:55.717 CXX test/cpp_headers/crc16.o 00:02:55.717 LINK app_repeat 00:02:55.717 CC test/event/scheduler/scheduler.o 00:02:55.717 CC test/nvme/overhead/overhead.o 00:02:55.717 CXX test/cpp_headers/crc32.o 00:02:55.978 LINK sgl 00:02:55.978 LINK scheduler 00:02:55.978 CC test/nvme/err_injection/err_injection.o 00:02:55.978 LINK nvme_dp 00:02:55.978 CXX test/cpp_headers/crc64.o 00:02:55.978 LINK pci_ut 00:02:55.978 LINK spdk_bdev 00:02:55.978 LINK spdk_nvme 00:02:55.978 CC test/nvme/startup/startup.o 00:02:55.978 CC test/nvme/reserve/reserve.o 00:02:55.978 LINK err_injection 00:02:55.978 CXX test/cpp_headers/dif.o 00:02:56.240 LINK overhead 00:02:56.240 CC test/nvme/simple_copy/simple_copy.o 00:02:56.240 CXX test/cpp_headers/dma.o 00:02:56.240 CC test/nvme/connect_stress/connect_stress.o 00:02:56.240 CC test/nvme/boot_partition/boot_partition.o 00:02:56.240 LINK startup 00:02:56.240 CXX test/cpp_headers/endian.o 00:02:56.240 CC test/nvme/compliance/nvme_compliance.o 00:02:56.240 CXX test/cpp_headers/env_dpdk.o 00:02:56.240 LINK reserve 00:02:56.240 LINK connect_stress 00:02:56.240 CC test/nvme/fused_ordering/fused_ordering.o 00:02:56.240 LINK simple_copy 00:02:56.240 LINK boot_partition 00:02:56.240 CXX test/cpp_headers/env.o 00:02:56.501 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:56.501 CXX test/cpp_headers/event.o 00:02:56.501 CC test/nvme/fdp/fdp.o 00:02:56.501 CC test/nvme/cuse/cuse.o 00:02:56.501 CXX test/cpp_headers/fd_group.o 00:02:56.501 LINK fused_ordering 00:02:56.501 CXX test/cpp_headers/fd.o 00:02:56.501 CXX test/cpp_headers/file.o 00:02:56.501 CXX test/cpp_headers/ftl.o 00:02:56.501 LINK doorbell_aers 00:02:56.501 LINK nvme_compliance 00:02:56.501 CXX test/cpp_headers/gpt_spec.o 00:02:56.502 CXX test/cpp_headers/hexlify.o 00:02:56.502 CXX test/cpp_headers/histogram_data.o 00:02:56.502 CXX test/cpp_headers/idxd.o 00:02:56.762 CXX test/cpp_headers/idxd_spec.o 00:02:56.762 CXX test/cpp_headers/init.o 00:02:56.762 CXX test/cpp_headers/ioat.o 00:02:56.762 LINK fdp 00:02:56.762 CXX test/cpp_headers/ioat_spec.o 00:02:56.762 CXX test/cpp_headers/iscsi_spec.o 00:02:56.762 CXX test/cpp_headers/json.o 00:02:56.762 CXX test/cpp_headers/jsonrpc.o 00:02:56.762 CXX test/cpp_headers/likely.o 00:02:56.762 CXX test/cpp_headers/log.o 00:02:56.762 CXX test/cpp_headers/lvol.o 00:02:56.762 CXX test/cpp_headers/memory.o 00:02:56.762 CXX test/cpp_headers/mmio.o 00:02:57.024 CXX test/cpp_headers/nbd.o 00:02:57.024 CXX test/cpp_headers/notify.o 
00:02:57.024 CXX test/cpp_headers/nvme.o 00:02:57.024 CXX test/cpp_headers/nvme_intel.o 00:02:57.024 CXX test/cpp_headers/nvme_ocssd.o 00:02:57.024 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:57.024 CXX test/cpp_headers/nvme_spec.o 00:02:57.024 CXX test/cpp_headers/nvme_zns.o 00:02:57.024 CXX test/cpp_headers/nvmf_cmd.o 00:02:57.024 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:57.024 CXX test/cpp_headers/nvmf.o 00:02:57.024 CXX test/cpp_headers/nvmf_spec.o 00:02:57.024 CXX test/cpp_headers/nvmf_transport.o 00:02:57.024 CXX test/cpp_headers/opal.o 00:02:57.024 CXX test/cpp_headers/opal_spec.o 00:02:57.024 CXX test/cpp_headers/pci_ids.o 00:02:57.285 CXX test/cpp_headers/pipe.o 00:02:57.285 CXX test/cpp_headers/queue.o 00:02:57.285 CXX test/cpp_headers/reduce.o 00:02:57.285 CXX test/cpp_headers/rpc.o 00:02:57.285 CXX test/cpp_headers/scheduler.o 00:02:57.285 LINK cuse 00:02:57.285 CXX test/cpp_headers/scsi.o 00:02:57.285 CXX test/cpp_headers/scsi_spec.o 00:02:57.285 CXX test/cpp_headers/sock.o 00:02:57.285 CXX test/cpp_headers/stdinc.o 00:02:57.285 CXX test/cpp_headers/string.o 00:02:57.285 CXX test/cpp_headers/thread.o 00:02:57.285 CXX test/cpp_headers/trace.o 00:02:57.285 CXX test/cpp_headers/trace_parser.o 00:02:57.285 CXX test/cpp_headers/tree.o 00:02:57.285 CXX test/cpp_headers/ublk.o 00:02:57.285 CXX test/cpp_headers/util.o 00:02:57.285 CXX test/cpp_headers/uuid.o 00:02:57.285 CXX test/cpp_headers/version.o 00:02:57.285 CXX test/cpp_headers/vfio_user_pci.o 00:02:57.546 CXX test/cpp_headers/vfio_user_spec.o 00:02:57.546 CXX test/cpp_headers/vhost.o 00:02:57.546 CXX test/cpp_headers/vmd.o 00:02:57.546 CXX test/cpp_headers/xor.o 00:02:57.546 CXX test/cpp_headers/zipf.o 00:02:58.931 LINK esnap 00:02:59.192 00:02:59.192 real 0m50.292s 00:02:59.192 user 4m51.049s 00:02:59.192 sys 0m58.445s 00:02:59.192 ************************************ 00:02:59.192 END TEST make 00:02:59.192 20:10:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:59.192 20:10:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.192 ************************************ 00:02:59.454 20:10:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:59.454 20:10:14 -- nvmf/common.sh@7 -- # uname -s 00:02:59.454 20:10:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:59.454 20:10:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:59.454 20:10:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:59.454 20:10:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:59.454 20:10:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:59.454 20:10:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:59.454 20:10:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:59.454 20:10:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:59.454 20:10:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:59.454 20:10:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:59.454 20:10:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b92e1fe-a063-41e2-8301-3ad34ae218a8 00:02:59.454 20:10:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=7b92e1fe-a063-41e2-8301-3ad34ae218a8 00:02:59.454 20:10:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:59.454 20:10:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:59.454 20:10:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:59.454 20:10:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:59.454 
20:10:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:59.454 20:10:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.454 20:10:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.454 20:10:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.454 20:10:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.454 20:10:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.454 20:10:14 -- paths/export.sh@5 -- # export PATH 00:02:59.454 20:10:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.454 20:10:14 -- nvmf/common.sh@46 -- # : 0 00:02:59.454 20:10:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:59.454 20:10:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:59.454 20:10:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:59.454 20:10:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:59.454 20:10:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:59.454 20:10:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:59.454 20:10:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:59.454 20:10:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:59.454 20:10:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:59.454 20:10:14 -- spdk/autotest.sh@32 -- # uname -s 00:02:59.454 20:10:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:59.454 20:10:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:59.454 20:10:14 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:59.454 20:10:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:59.454 20:10:14 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:59.454 20:10:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:59.454 20:10:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:59.454 20:10:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:59.454 20:10:14 -- spdk/autotest.sh@48 -- # udevadm_pid=48153 00:02:59.454 20:10:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:59.454 20:10:14 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:59.454 20:10:14 -- spdk/autotest.sh@54 -- # echo 48155 00:02:59.454 20:10:14 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:59.454 
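Aside: the autotest bootstrap traced around this point swaps in SPDK's coredump collector and launches background monitors before any test runs. A minimal Bash sketch of those hooks, where $rootdir and $output_dir are illustrative stand-ins for the paths autotest.sh derives itself:

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # saved so it can be restored on exit
    mkdir -p "$output_dir/coredumps"
    # route kernel coredumps through the collector script (writing core_pattern needs root)
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    udevadm monitor --property &                          # record hotplug events for the whole run
    mkdir -p "$output_dir/power"
    "$rootdir/scripts/perf/pm/collect-cpu-load" -d "$output_dir/power" &   # background perf monitors
    "$rootdir/scripts/perf/pm/collect-vmstat" -d "$output_dir/power" &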
20:10:14 -- spdk/autotest.sh@56 -- # echo 48156 00:02:59.454 20:10:14 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:59.454 20:10:14 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:59.454 20:10:14 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:59.454 20:10:14 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:59.454 20:10:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:59.454 20:10:14 -- common/autotest_common.sh@10 -- # set +x 00:02:59.454 20:10:14 -- spdk/autotest.sh@70 -- # create_test_list 00:02:59.454 20:10:14 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:59.454 20:10:14 -- common/autotest_common.sh@10 -- # set +x 00:02:59.454 20:10:14 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:59.454 20:10:14 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:59.454 20:10:14 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:59.454 20:10:14 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:59.454 20:10:14 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:59.454 20:10:14 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:59.454 20:10:14 -- common/autotest_common.sh@1440 -- # uname 00:02:59.454 20:10:14 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:59.454 20:10:14 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:59.454 20:10:14 -- common/autotest_common.sh@1460 -- # uname 00:02:59.454 20:10:14 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:59.454 20:10:14 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:59.454 20:10:14 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:59.454 20:10:14 -- spdk/autotest.sh@83 -- # hash lcov 00:02:59.454 20:10:14 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:59.454 20:10:14 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:59.454 --rc lcov_branch_coverage=1 00:02:59.454 --rc lcov_function_coverage=1 00:02:59.454 --rc genhtml_branch_coverage=1 00:02:59.454 --rc genhtml_function_coverage=1 00:02:59.454 --rc genhtml_legend=1 00:02:59.454 --rc geninfo_all_blocks=1 00:02:59.454 ' 00:02:59.454 20:10:14 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:59.454 --rc lcov_branch_coverage=1 00:02:59.454 --rc lcov_function_coverage=1 00:02:59.454 --rc genhtml_branch_coverage=1 00:02:59.454 --rc genhtml_function_coverage=1 00:02:59.454 --rc genhtml_legend=1 00:02:59.454 --rc geninfo_all_blocks=1 00:02:59.454 ' 00:02:59.454 20:10:14 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:59.454 --rc lcov_branch_coverage=1 00:02:59.454 --rc lcov_function_coverage=1 00:02:59.454 --rc genhtml_branch_coverage=1 00:02:59.454 --rc genhtml_function_coverage=1 00:02:59.454 --rc genhtml_legend=1 00:02:59.454 --rc geninfo_all_blocks=1 00:02:59.454 --no-external' 00:02:59.454 20:10:14 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:59.454 --rc lcov_branch_coverage=1 00:02:59.454 --rc lcov_function_coverage=1 00:02:59.454 --rc genhtml_branch_coverage=1 00:02:59.454 --rc genhtml_function_coverage=1 00:02:59.454 --rc genhtml_legend=1 00:02:59.454 --rc geninfo_all_blocks=1 00:02:59.454 --no-external' 00:02:59.454 20:10:14 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 
--no-external -v 00:02:59.454 lcov: LCOV version 1.15 00:02:59.454 20:10:14 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:06.130 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:06.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:06.130 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:06.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:06.130 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:06.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:24.241 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:24.241 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:24.242 
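Aside: the LCOV_OPTS export and lcov calls above implement the standard gcov baseline-then-merge flow, and the long run of "no functions found" warnings here and below is expected, since the cpp_headers test objects compile each header on its own and so their .gcno files contain no function definitions. A minimal sketch of the same coverage sequence, with illustrative directory and tracefile names:

    # zero-count baseline captured before any test executes (-c -i = initial capture)
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o cov_base.info
    # ... run the test suites so the .gcda counters get written ...
    lcov $LCOV_OPTS --no-external -q -c -t Tests -d "$src" -o cov_test.info   # post-run capture
    # merge baseline and test data so files never executed still show up at 0%
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
    genhtml cov_total.info -o "$out/coverage"                                 # optional HTML report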
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions 
found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:24.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:24.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:24.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:24.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:25.184 20:10:40 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:25.184 20:10:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:25.184 20:10:40 -- common/autotest_common.sh@10 -- # set +x 00:03:25.184 20:10:40 -- spdk/autotest.sh@102 -- # rm -f 00:03:25.184 20:10:40 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:26.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:26.126 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:26.126 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:26.126 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:26.126 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:26.126 20:10:40 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:26.126 20:10:40 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:26.126 20:10:40 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:26.126 20:10:40 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:26.126 20:10:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.126 20:10:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:26.126 20:10:40 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:26.126 20:10:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.126 20:10:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.126 20:10:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.126 20:10:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:26.126 20:10:40 -- 
common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:26.126 20:10:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:26.126 20:10:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.126 20:10:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.126 20:10:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:03:26.126 20:10:40 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:03:26.126 20:10:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:26.126 20:10:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.126 20:10:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.126 20:10:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:03:26.126 20:10:40 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:03:26.126 20:10:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:26.127 20:10:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.127 20:10:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.127 20:10:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:03:26.127 20:10:40 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:03:26.127 20:10:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:26.127 20:10:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.127 20:10:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.127 20:10:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:03:26.127 20:10:40 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:03:26.127 20:10:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:26.127 20:10:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.127 20:10:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.127 20:10:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:03:26.127 20:10:40 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:03:26.127 20:10:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:26.127 20:10:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.127 20:10:40 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:26.127 20:10:40 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:26.127 20:10:40 -- spdk/autotest.sh@121 -- # grep -v p 00:03:26.127 20:10:40 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.127 20:10:40 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:26.127 20:10:40 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:26.127 20:10:40 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:26.127 20:10:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:26.127 No valid GPT data, bailing 00:03:26.127 20:10:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:26.127 20:10:41 -- scripts/common.sh@393 -- # pt= 00:03:26.127 20:10:41 -- scripts/common.sh@394 -- # return 1 00:03:26.127 20:10:41 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:26.127 1+0 records in 00:03:26.127 1+0 records out 00:03:26.127 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.0118955 s, 88.1 MB/s 00:03:26.127 20:10:41 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.127 20:10:41 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:26.127 20:10:41 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:03:26.127 20:10:41 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:26.127 20:10:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:26.388 No valid GPT data, bailing 00:03:26.388 20:10:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:26.388 20:10:41 -- scripts/common.sh@393 -- # pt= 00:03:26.388 20:10:41 -- scripts/common.sh@394 -- # return 1 00:03:26.388 20:10:41 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:26.388 1+0 records in 00:03:26.388 1+0 records out 00:03:26.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041819 s, 251 MB/s 00:03:26.388 20:10:41 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.388 20:10:41 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:26.388 20:10:41 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n1 00:03:26.388 20:10:41 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:26.388 20:10:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:26.388 No valid GPT data, bailing 00:03:26.388 20:10:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:26.388 20:10:41 -- scripts/common.sh@393 -- # pt= 00:03:26.388 20:10:41 -- scripts/common.sh@394 -- # return 1 00:03:26.388 20:10:41 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:26.388 1+0 records in 00:03:26.388 1+0 records out 00:03:26.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00528694 s, 198 MB/s 00:03:26.388 20:10:41 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.388 20:10:41 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:26.388 20:10:41 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n2 00:03:26.388 20:10:41 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:26.388 20:10:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:26.388 No valid GPT data, bailing 00:03:26.388 20:10:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:26.388 20:10:41 -- scripts/common.sh@393 -- # pt= 00:03:26.388 20:10:41 -- scripts/common.sh@394 -- # return 1 00:03:26.388 20:10:41 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:26.388 1+0 records in 00:03:26.388 1+0 records out 00:03:26.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581894 s, 180 MB/s 00:03:26.388 20:10:41 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.388 20:10:41 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:26.388 20:10:41 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n3 00:03:26.388 20:10:41 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:26.388 20:10:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:26.388 No valid GPT data, bailing 00:03:26.388 20:10:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:26.649 20:10:41 -- scripts/common.sh@393 -- # pt= 00:03:26.649 20:10:41 -- scripts/common.sh@394 -- # return 1 00:03:26.649 20:10:41 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M 
count=1 00:03:26.649 1+0 records in 00:03:26.649 1+0 records out 00:03:26.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416084 s, 252 MB/s 00:03:26.649 20:10:41 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.649 20:10:41 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:26.649 20:10:41 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n1 00:03:26.649 20:10:41 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:26.649 20:10:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:26.649 No valid GPT data, bailing 00:03:26.649 20:10:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:26.649 20:10:41 -- scripts/common.sh@393 -- # pt= 00:03:26.649 20:10:41 -- scripts/common.sh@394 -- # return 1 00:03:26.649 20:10:41 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:26.649 1+0 records in 00:03:26.649 1+0 records out 00:03:26.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00556663 s, 188 MB/s 00:03:26.649 20:10:41 -- spdk/autotest.sh@129 -- # sync 00:03:27.224 20:10:41 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.224 20:10:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.224 20:10:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:28.610 20:10:43 -- spdk/autotest.sh@135 -- # uname -s 00:03:28.610 20:10:43 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:28.610 20:10:43 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:28.610 20:10:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:28.610 20:10:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:28.610 20:10:43 -- common/autotest_common.sh@10 -- # set +x 00:03:28.610 ************************************ 00:03:28.610 START TEST setup.sh 00:03:28.610 ************************************ 00:03:28.610 20:10:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:28.610 * Looking for test storage... 00:03:28.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:28.610 20:10:43 -- setup/test-setup.sh@10 -- # uname -s 00:03:28.610 20:10:43 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:28.610 20:10:43 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:28.610 20:10:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:28.610 20:10:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:28.610 20:10:43 -- common/autotest_common.sh@10 -- # set +x 00:03:28.610 ************************************ 00:03:28.610 START TEST acl 00:03:28.610 ************************************ 00:03:28.610 20:10:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:28.610 * Looking for test storage... 
00:03:28.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:28.611 20:10:43 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:28.611 20:10:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:28.611 20:10:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:28.611 20:10:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:28.611 20:10:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:28.611 20:10:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:28.611 20:10:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:28.611 20:10:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:28.611 20:10:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:03:28.611 20:10:43 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:03:28.611 20:10:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:28.611 20:10:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:03:28.611 20:10:43 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:03:28.611 20:10:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:28.611 20:10:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:28.611 20:10:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:03:28.611 20:10:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:28.611 20:10:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:28.611 20:10:43 -- setup/acl.sh@12 -- # devs=() 00:03:28.611 20:10:43 -- setup/acl.sh@12 -- # declare -a devs 
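Aside: both get_zoned_devs traces (before pre-cleanup and again in acl.sh above) reduce to reading each NVMe block device's queue/zoned sysfs attribute so zoned namespaces can be excluded from the tests. A minimal sketch of that scan:

    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        # "none" marks a conventional namespace; any other value (host-aware, host-managed) is zoned
        [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]] && zoned_devs+=("${nvme##*/}")
    done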
00:03:28.611 20:10:43 -- setup/acl.sh@13 -- # drivers=() 00:03:28.611 20:10:43 -- setup/acl.sh@13 -- # declare -A drivers 00:03:28.611 20:10:43 -- setup/acl.sh@51 -- # setup reset 00:03:28.611 20:10:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.611 20:10:43 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:29.994 20:10:44 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:29.994 20:10:44 -- setup/acl.sh@16 -- # local dev driver 00:03:29.994 20:10:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.994 20:10:44 -- setup/acl.sh@15 -- # setup output status 00:03:29.994 20:10:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.994 20:10:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:29.994 Hugepages 00:03:29.994 node hugesize free / total 00:03:29.994 20:10:44 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:29.994 20:10:44 -- setup/acl.sh@19 -- # continue 00:03:29.994 20:10:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.994 00:03:29.994 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.994 20:10:44 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:29.994 20:10:44 -- setup/acl.sh@19 -- # continue 00:03:29.994 20:10:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.994 20:10:44 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:29.994 20:10:44 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:29.994 20:10:44 -- setup/acl.sh@20 -- # continue 00:03:29.994 20:10:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.994 20:10:44 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:29.994 20:10:44 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.994 20:10:44 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:29.994 20:10:44 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.994 20:10:44 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.994 20:10:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.994 20:10:44 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:29.994 20:10:44 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.994 20:10:44 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:29.994 20:10:44 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.994 20:10:44 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.994 20:10:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.994 20:10:44 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:29.994 20:10:44 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.994 20:10:44 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:29.994 20:10:44 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.994 20:10:44 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.994 20:10:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.254 20:10:44 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:30.254 20:10:44 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:30.254 20:10:44 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:30.254 20:10:44 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:30.254 20:10:44 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:30.254 20:10:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.254 20:10:44 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:30.254 20:10:44 -- setup/acl.sh@54 -- # run_test denied denied 00:03:30.254 20:10:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.254 
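Aside: the collect_setup_devs loop above parses setup.sh status output one row at a time, keeping only NVMe-bound PCI functions. A minimal sketch of that read loop, with $rootdir illustrative and the blocked-device check named as in the trace:

    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue            # keep only PCI BDF rows, skip hugepage lines
        [[ $driver == nvme ]] || continue            # only NVMe-bound controllers matter here
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue   # honor the block list the denied test relies on
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <("$rootdir/scripts/setup.sh" status)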
20:10:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.254 20:10:44 -- common/autotest_common.sh@10 -- # set +x 00:03:30.254 ************************************ 00:03:30.254 START TEST denied 00:03:30.254 ************************************ 00:03:30.254 20:10:44 -- common/autotest_common.sh@1104 -- # denied 00:03:30.254 20:10:44 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:30.254 20:10:44 -- setup/acl.sh@38 -- # setup output config 00:03:30.254 20:10:44 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:30.254 20:10:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.254 20:10:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:31.198 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:31.198 20:10:46 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:31.198 20:10:46 -- setup/acl.sh@28 -- # local dev driver 00:03:31.198 20:10:46 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.198 20:10:46 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:31.198 20:10:46 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:31.198 20:10:46 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.198 20:10:46 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.198 20:10:46 -- setup/acl.sh@41 -- # setup reset 00:03:31.198 20:10:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.198 20:10:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:37.788 00:03:37.788 real 0m7.059s 00:03:37.788 user 0m0.694s 00:03:37.788 sys 0m1.193s 00:03:37.788 20:10:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.788 20:10:52 -- common/autotest_common.sh@10 -- # set +x 00:03:37.788 ************************************ 00:03:37.788 END TEST denied 00:03:37.788 ************************************ 00:03:37.788 20:10:52 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:37.788 20:10:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:37.788 20:10:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:37.788 20:10:52 -- common/autotest_common.sh@10 -- # set +x 00:03:37.788 ************************************ 00:03:37.788 START TEST allowed 00:03:37.788 ************************************ 00:03:37.788 20:10:52 -- common/autotest_common.sh@1104 -- # allowed 00:03:37.788 20:10:52 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:37.788 20:10:52 -- setup/acl.sh@45 -- # setup output config 00:03:37.788 20:10:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.788 20:10:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:37.788 20:10:52 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:38.359 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:38.359 20:10:53 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:38.359 20:10:53 -- setup/acl.sh@28 -- # local dev driver 00:03:38.359 20:10:53 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:38.359 20:10:53 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:38.359 20:10:53 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:03:38.359 20:10:53 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:38.359 20:10:53 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:38.359 20:10:53 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:38.359 20:10:53 -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:08.0 ]] 00:03:38.359 20:10:53 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:03:38.359 20:10:53 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:38.359 20:10:53 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:38.359 20:10:53 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:38.359 20:10:53 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:03:38.359 20:10:53 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:03:38.359 20:10:53 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:38.359 20:10:53 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:38.359 20:10:53 -- setup/acl.sh@48 -- # setup reset 00:03:38.359 20:10:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.359 20:10:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.305 ************************************ 00:03:39.305 END TEST allowed 00:03:39.305 ************************************ 00:03:39.305 00:03:39.305 real 0m2.137s 00:03:39.305 user 0m0.827s 00:03:39.305 sys 0m1.041s 00:03:39.305 20:10:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.305 20:10:54 -- common/autotest_common.sh@10 -- # set +x 00:03:39.567 00:03:39.567 real 0m10.859s 00:03:39.567 user 0m2.169s 00:03:39.567 sys 0m3.099s 00:03:39.567 20:10:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.567 20:10:54 -- common/autotest_common.sh@10 -- # set +x 00:03:39.567 ************************************ 00:03:39.567 END TEST acl 00:03:39.567 ************************************ 00:03:39.567 20:10:54 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:39.567 20:10:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:39.567 20:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:39.567 20:10:54 -- common/autotest_common.sh@10 -- # set +x 00:03:39.567 ************************************ 00:03:39.567 START TEST hugepages 00:03:39.567 ************************************ 00:03:39.567 20:10:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:39.567 * Looking for test storage... 
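Both ACL tests hinge on the same verify step traced above: resolve the device's driver symlink in sysfs and compare its basename against nvme (the denied case instead expects setup.sh to log that it skipped the blocked controller). A minimal sketch of that check, reconstructed from the trace rather than copied from setup/acl.sh:

#!/usr/bin/env bash
# Sketch: confirm each PCI function is currently bound to the nvme driver.
verify() {
    local dev driver
    for dev in "$@"; do
        [[ -e /sys/bus/pci/devices/$dev ]] || return 1
        # the "driver" symlink resolves to /sys/bus/pci/drivers/<name>
        driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
        [[ ${driver##*/} == nvme ]] || return 1
    done
}
verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 && echo 'all bound to nvme'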
00:03:39.567 20:10:54 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:03:39.567 20:10:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:39.567 20:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:39.567 20:10:54 -- common/autotest_common.sh@10 -- # set +x
00:03:39.567 ************************************
00:03:39.567 START TEST hugepages
00:03:39.567 ************************************
00:03:39.567 20:10:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:03:39.567 * Looking for test storage...
00:03:39.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:39.567 20:10:54 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:39.567 20:10:54 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:39.567 20:10:54 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:39.567 20:10:54 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:39.567 20:10:54 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:39.567 20:10:54 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:39.567 20:10:54 -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:39.567 20:10:54 -- setup/common.sh@18 -- # local node=
00:03:39.567 20:10:54 -- setup/common.sh@19 -- # local var val
00:03:39.567 20:10:54 -- setup/common.sh@20 -- # local mem_f mem
00:03:39.567 20:10:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.567 20:10:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.567 20:10:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.567 20:10:54 -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.567 20:10:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.567 20:10:54 -- setup/common.sh@31 -- # IFS=': '
00:03:39.567 20:10:54 -- setup/common.sh@31 -- # read -r var val _
00:03:39.568 20:10:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 5808884 kB' 'MemAvailable: 7358744 kB' 'Buffers: 2684 kB' 'Cached: 1762920 kB' 'SwapCached: 0 kB' 'Active: 452360 kB' 'Inactive: 1415860 kB' 'Active(anon): 113152 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 104300 kB' 'Mapped: 50748 kB' 'Shmem: 10532 kB' 'KReclaimable: 63884 kB' 'Slab: 163796 kB' 'SReclaimable: 63884 kB' 'SUnreclaim: 99912 kB' 'KernelStack: 6636 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12409996 kB' 'Committed_AS: 297448 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:39.568 20:10:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:39.568 20:10:54 -- setup/common.sh@32 -- # continue
00:03:39.568 20:10:54 -- setup/common.sh@31 -- # IFS=': '
00:03:39.568 20:10:54 -- setup/common.sh@31 -- # read -r var val _
[the same four xtrace lines repeat for every other /proc/meminfo key -- MemFree through HugePages_Surp -- none of which matches Hugepagesize]
00:03:39.569 20:10:54 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:39.569 20:10:54 -- setup/common.sh@33 -- # echo 2048
00:03:39.569 20:10:54 -- setup/common.sh@33 -- # return 0
00:03:39.569 20:10:54 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:39.569 20:10:54 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:39.569 20:10:54 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:39.569 20:10:54 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:39.569 20:10:54 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:39.569 20:10:54 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
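The get_meminfo call that just returned 2048 is the parsing pattern the loop trace walks through: split each /proc/meminfo line on ': ', skip keys until the requested one, print its value. A reduced sketch of that pattern (the node-specific meminfo handling and Node-prefix stripping from setup/common.sh are left out here):

#!/usr/bin/env bash
# Sketch: look up one key in /proc/meminfo the way the traced loop does.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo Hugepagesize   # prints 2048 on this runner (value is in kB)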
00:03:39.569 20:10:54 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:39.569 20:10:54 -- setup/hugepages.sh@207 -- # get_nodes 00:03:39.569 20:10:54 -- setup/hugepages.sh@27 -- # local node 00:03:39.569 20:10:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.569 20:10:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:39.569 20:10:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:39.569 20:10:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.569 20:10:54 -- setup/hugepages.sh@208 -- # clear_hp 00:03:39.569 20:10:54 -- setup/hugepages.sh@37 -- # local node hp 00:03:39.569 20:10:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:39.569 20:10:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.569 20:10:54 -- setup/hugepages.sh@41 -- # echo 0 00:03:39.569 20:10:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.569 20:10:54 -- setup/hugepages.sh@41 -- # echo 0 00:03:39.569 20:10:54 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:39.569 20:10:54 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:39.569 20:10:54 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:39.569 20:10:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:39.569 20:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:39.569 20:10:54 -- common/autotest_common.sh@10 -- # set +x 00:03:39.569 ************************************ 00:03:39.569 START TEST default_setup 00:03:39.569 ************************************ 00:03:39.569 20:10:54 -- common/autotest_common.sh@1104 -- # default_setup 00:03:39.569 20:10:54 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:39.569 20:10:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:39.569 20:10:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:39.569 20:10:54 -- setup/hugepages.sh@51 -- # shift 00:03:39.569 20:10:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:39.569 20:10:54 -- setup/hugepages.sh@52 -- # local node_ids 00:03:39.569 20:10:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.569 20:10:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:39.569 20:10:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:39.569 20:10:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:39.569 20:10:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.569 20:10:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:39.569 20:10:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:39.569 20:10:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.569 20:10:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.569 20:10:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:39.569 20:10:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.569 20:10:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:39.569 20:10:54 -- setup/hugepages.sh@73 -- # return 0 00:03:39.569 20:10:54 -- setup/hugepages.sh@137 -- # setup output 00:03:39.569 20:10:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.569 20:10:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:40.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.792 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.792 
0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.792 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.792 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.792 20:10:55 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:40.792 20:10:55 -- setup/hugepages.sh@89 -- # local node 00:03:40.792 20:10:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.792 20:10:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.792 20:10:55 -- setup/hugepages.sh@92 -- # local surp 00:03:40.792 20:10:55 -- setup/hugepages.sh@93 -- # local resv 00:03:40.792 20:10:55 -- setup/hugepages.sh@94 -- # local anon 00:03:40.792 20:10:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.792 20:10:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.792 20:10:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.792 20:10:55 -- setup/common.sh@18 -- # local node= 00:03:40.792 20:10:55 -- setup/common.sh@19 -- # local var val 00:03:40.792 20:10:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.792 20:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.792 20:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.792 20:10:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.792 20:10:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.792 20:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7879124 kB' 'MemAvailable: 9428772 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467888 kB' 'Inactive: 1415864 kB' 'Active(anon): 128680 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119332 kB' 'Mapped: 50872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163512 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100056 kB' 'KernelStack: 6640 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 
-- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.792 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.792 20:10:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.793 20:10:55 -- setup/common.sh@33 -- # echo 0 00:03:40.793 
20:10:55 -- setup/common.sh@33 -- # return 0 00:03:40.793 20:10:55 -- setup/hugepages.sh@97 -- # anon=0 00:03:40.793 20:10:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.793 20:10:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.793 20:10:55 -- setup/common.sh@18 -- # local node= 00:03:40.793 20:10:55 -- setup/common.sh@19 -- # local var val 00:03:40.793 20:10:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.793 20:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.793 20:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.793 20:10:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.793 20:10:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.793 20:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7879124 kB' 'MemAvailable: 9428772 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467760 kB' 'Inactive: 1415864 kB' 'Active(anon): 128552 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119204 kB' 'Mapped: 50924 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163508 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100052 kB' 'KernelStack: 6608 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 
-- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.793 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.793 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 
-- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # continue 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.794 20:10:55 -- setup/common.sh@33 -- # echo 0 00:03:40.794 20:10:55 -- setup/common.sh@33 -- # return 0 00:03:40.794 20:10:55 -- setup/hugepages.sh@99 -- # surp=0 00:03:40.794 20:10:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.794 20:10:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.794 20:10:55 -- setup/common.sh@18 -- # local node= 00:03:40.794 20:10:55 -- setup/common.sh@19 -- # local var val 00:03:40.794 20:10:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.794 20:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.794 20:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.794 20:10:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.794 20:10:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.794 20:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.794 20:10:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.794 20:10:55 -- 
00:03:40.794 20:10:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:40.794 20:10:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:40.794 20:10:55 -- setup/common.sh@18 -- # local node=
00:03:40.794 20:10:55 -- setup/common.sh@19 -- # local var val
00:03:40.794 20:10:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.794 20:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.794 20:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.794 20:10:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.794 20:10:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.794 20:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.794 20:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7879124 kB' 'MemAvailable: 9428772 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467456 kB' 'Inactive: 1415864 kB' 'Active(anon): 128248 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119420 kB' 'Mapped: 50924 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163508 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100052 kB' 'KernelStack: 6592 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
    [... @31/@32 read/compare/continue trace elided: the scan walks every field of the snapshot in order until HugePages_Rsvd matches ...]
00:03:40.795 20:10:55 -- setup/common.sh@33 -- # echo 0
00:03:40.795 20:10:55 -- setup/common.sh@33 -- # return 0
00:03:40.795 20:10:55 -- setup/hugepages.sh@100 -- # resv=0
00:03:40.795 20:10:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:40.795 nr_hugepages=1024
00:03:40.795 20:10:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:40.795 resv_hugepages=0
00:03:40.795 20:10:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:40.795 surplus_hugepages=0
00:03:40.795 20:10:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:40.795 anon_hugepages=0
00:03:40.795 20:10:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.795 20:10:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
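The surp and resv values just gathered feed the accounting identity the @107 check asserts: every huge page the kernel reports must be explained as requested pool, surplus, or reserved. A compressed sketch of that check, reusing the get_meminfo sketch above (the exit-on-mismatch is illustrative; the real script reports failure through its test harness):

# The "(( 1024 == nr_hugepages + surp + resv ))" check from the trace:
# the kernel's pool must be fully covered by request + surplus + reserve.
nr_hugepages=1024                       # what default_setup configured
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo HugePages_Total)    # 1024 in this run

(( total == nr_hugepages + surp + resv )) ||
    { echo "hugepage accounting mismatch: $total != $nr_hugepages+$surp+$resv" >&2; exit 1; }
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"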
00:03:40.795 20:10:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:40.796 20:10:55 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:40.796 20:10:55 -- setup/common.sh@18 -- # local node=
00:03:40.796 20:10:55 -- setup/common.sh@19 -- # local var val
00:03:40.796 20:10:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.796 20:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.796 20:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.796 20:10:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.796 20:10:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.796 20:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.796 20:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7878620 kB' 'MemAvailable: 9428268 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467348 kB' 'Inactive: 1415864 kB' 'Active(anon): 128140 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119216 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163508 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100052 kB' 'KernelStack: 6560 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
    [... @31/@32 read/compare/continue trace elided: the scan walks the snapshot in order until HugePages_Total matches ...]
00:03:40.797 20:10:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:40.797 20:10:55 -- setup/common.sh@33 -- # echo 1024
00:03:40.797 20:10:55 -- setup/common.sh@33 -- # return 0
00:03:40.797 20:10:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.797 20:10:55 -- setup/hugepages.sh@112 -- # get_nodes
00:03:40.797 20:10:55 -- setup/hugepages.sh@27 -- # local node
00:03:40.797 20:10:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.797 20:10:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:40.797 20:10:55 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:40.797 20:10:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:40.797 20:10:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.797 20:10:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.797 20:10:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:40.797 20:10:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.797 20:10:55 -- setup/common.sh@18 -- # local node=0
00:03:40.797 20:10:55 -- setup/common.sh@19 -- # local var val
00:03:40.797 20:10:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.797 20:10:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.797 20:10:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:40.797 20:10:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:40.797 20:10:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.797 20:10:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.797 20:10:55 -- setup/common.sh@31 -- # IFS=': '
00:03:40.797 20:10:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7878620 kB' 'MemUsed: 4358468 kB' 'SwapCached: 0 kB' 'Active: 467564 kB' 'Inactive: 1415864 kB' 'Active(anon): 128356 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1765592 kB' 'Mapped: 50788 kB' 'AnonPages: 119176 kB' 'Shmem: 10492 kB' 'KernelStack: 6612 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63456 kB' 'Slab: 163508 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:40.797 20:10:55 -- setup/common.sh@31 -- # read -r var val _
    [... @31/@32 read/compare/continue trace elided: the scan walks node0's snapshot in order until HugePages_Surp matches ...]
00:03:40.798 20:10:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.077 20:10:55 -- setup/common.sh@33 -- # echo 0
00:03:41.077 20:10:55 -- setup/common.sh@33 -- # return 0
00:03:41.077 20:10:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:41.077 20:10:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.077 20:10:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.077 20:10:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.077 20:10:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:41.077 node0=1024 expecting 1024
00:03:41.077 20:10:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:41.077
00:03:41.077 real	0m1.261s
00:03:41.077 user	0m0.502s
00:03:41.077 sys	0m0.616s
00:03:41.077 20:10:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:41.077 20:10:55 -- common/autotest_common.sh@10 -- # set +x
00:03:41.077 ************************************
00:03:41.077 END TEST default_setup
00:03:41.077 ************************************
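The per-node pass that just finished generalizes the same accounting check across NUMA nodes: enumerate /sys/devices/system/node/node*, query each node's own meminfo, and compare the per-node count against the expected split. A sketch of that loop, again reusing the get_meminfo sketch above (this VM has a single node, so it runs once; on a machine without sysfs NUMA nodes the glob would not expand):

shopt -s extglob
# Per-node verification from the trace: one meminfo read per NUMA node,
# each expected to hold the full 1024-page pool on this single-node VM.
expected=1024
for node_dir in /sys/devices/system/node/node+([0-9]); do
    id=${node_dir##*node}
    count=$(get_meminfo HugePages_Total "$id")
    echo "node$id=$count expecting $expected"   # matches the log's output
    [[ $count == "$expected" ]] || exit 1
done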
00:03:41.077 20:10:55 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:41.077 20:10:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:41.077 20:10:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:41.077 20:10:55 -- common/autotest_common.sh@10 -- # set +x
00:03:41.077 ************************************
00:03:41.077 START TEST per_node_1G_alloc
00:03:41.077 ************************************
00:03:41.077 20:10:55 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:03:41.077 20:10:55 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:41.077 20:10:55 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:41.077 20:10:55 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:41.077 20:10:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:41.077 20:10:55 -- setup/hugepages.sh@51 -- # shift
00:03:41.077 20:10:55 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:41.077 20:10:55 -- setup/hugepages.sh@52 -- # local node_ids
00:03:41.077 20:10:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:41.077 20:10:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:41.077 20:10:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:41.077 20:10:55 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:41.077 20:10:55 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:41.077 20:10:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:41.077 20:10:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:41.077 20:10:55 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:41.077 20:10:55 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:41.077 20:10:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:41.077 20:10:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:41.077 20:10:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:41.077 20:10:55 -- setup/hugepages.sh@73 -- # return 0
00:03:41.077 20:10:55 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:41.077 20:10:55 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:03:41.077 20:10:55 -- setup/hugepages.sh@146 -- # setup output
00:03:41.077 20:10:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.077 20:10:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:41.339 20:10:56 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:03:41.339 20:10:56 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:41.339 20:10:56 -- setup/hugepages.sh@89 -- # local node
00:03:41.339 20:10:56 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:41.339 20:10:56 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:41.339 20:10:56 -- setup/hugepages.sh@92 -- # local surp
00:03:41.339 20:10:56 -- setup/hugepages.sh@93 -- # local resv
00:03:41.339 20:10:56 -- setup/hugepages.sh@94 -- # local anon
00:03:41.339 20:10:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
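One detail worth calling out from the verify_nr_hugepages entry above: before trusting AnonHugePages it checks /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active mode (here `always [madvise] never`). The `!= *\[\n\e\v\e\r\]*` glob in the trace just means "THP is not hard-disabled". A sketch of that gate, reusing the get_meminfo sketch above:

# THP gate from the trace: the bracketed word in this sysfs file is the
# active transparent-hugepage mode; "[never]" means THP is disabled and
# AnonHugePages would be meaningless to check.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)

if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    echo "anon_hugepages=$anon"
fi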
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.339 20:10:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.339 20:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.339 20:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.339 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8933068 kB' 'MemAvailable: 10482744 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467836 kB' 'Inactive: 1415892 kB' 'Active(anon): 128628 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 120064 kB' 'Mapped: 50800 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163592 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100136 kB' 'KernelStack: 6684 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.339 
20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.339 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.339 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.340 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.340 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.340 20:10:56 -- 
00:03:41.340 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: WritebackTmp through HardwareCorrupted compared against AnonHugePages; no match, continue]
00:03:41.340 20:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:41.340 20:10:56 -- setup/common.sh@33 -- # echo 0
00:03:41.340 20:10:56 -- setup/common.sh@33 -- # return 0
00:03:41.604 20:10:56 -- setup/hugepages.sh@97 -- # anon=0
00:03:41.604 20:10:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:41.604 20:10:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.604 20:10:56 -- setup/common.sh@18 -- # local node=
00:03:41.604 20:10:56 -- setup/common.sh@19 -- # local var val
00:03:41.604 20:10:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.604 20:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.604 20:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.604 20:10:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.604 20:10:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.604 20:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.604 20:10:56 -- setup/common.sh@31 -- # IFS=': '
00:03:41.604 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8932820 kB' 'MemAvailable: 10482496 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467584 kB' 'Inactive: 1415892 kB' 'Active(anon): 128376 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119508 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163604 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100148 kB' 'KernelStack: 6572 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:41.605 20:10:56 -- setup/common.sh@31 -- # read -r var val _
00:03:41.605 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through Active(anon) compared against HugePages_Surp; no match, continue]
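Note: the trace above is the get_meminfo helper scanning a meminfo snapshot key by key. A minimal bash sketch of that loop, reconstructed from the xtrace shown in this log (simplified, not the verbatim SPDK test/setup/common.sh source; error handling and the surrounding assertion wrapper are omitted):

    # Sketch: print the value of one meminfo key, as the traced loop does.
    get_meminfo() {
        local get=$1 node=${2:-}   # key to look up, optional NUMA node
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node lookups read the node's own meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it.
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        # IFS=': ' splits "HugePages_Total: 512" into var/val/rest.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In this run the calls resolve to get_meminfo AnonHugePages -> 0, and later get_meminfo HugePages_Surp 0 reads /sys/devices/system/node/node0/meminfo for node 0.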
00:03:41.605 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: Inactive(anon) through CmaFree compared against HugePages_Surp; no match, continue]
00:03:41.606 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: Unaccepted through HugePages_Rsvd compared against HugePages_Surp; no match, continue]
00:03:41.606 20:10:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.606 20:10:56 -- setup/common.sh@33 -- # echo 0
00:03:41.606 20:10:56 -- setup/common.sh@33 -- # return 0
00:03:41.606 20:10:56 -- setup/hugepages.sh@99 -- # surp=0
00:03:41.606 20:10:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:41.606 20:10:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:41.606 20:10:56 -- setup/common.sh@18 -- # local node=
00:03:41.606 20:10:56 -- setup/common.sh@19 -- # local var val
00:03:41.606 20:10:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.606 20:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.606 20:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.606 20:10:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.606 20:10:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.606 20:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.606 20:10:56 -- setup/common.sh@31 -- # IFS=': '
00:03:41.606 20:10:56 -- setup/common.sh@31 -- # read -r var val _
00:03:41.606 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8932820 kB' 'MemAvailable: 10482496 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467584 kB' 'Inactive: 1415892 kB' 'Active(anon): 128376 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119508 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163604 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100148 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 322144 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:41.606 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through Inactive(file) compared against HugePages_Rsvd; no match, continue]
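Note: the mem=("${mem[@]#Node +([0-9]) }") step in the preamble above exists because per-node meminfo files prefix each line with "Node <n> ", while /proc/meminfo lines carry no prefix (the expansion is then a no-op). A small stand-alone demo of that extglob parameter expansion, with illustrative values (not taken from this run's snapshots):

    # Demo: strip the "Node <n> " prefix carried by per-node meminfo lines.
    shopt -s extglob
    mem=('Node 0 MemTotal: 12237088 kB' 'Node 0 MemFree: 8900000 kB')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # -> MemTotal: 12237088 kB
    # -> MemFree: 8900000 kB

Stripping the prefix lets the same IFS=': ' read loop parse both the system-wide and the per-node files.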
00:03:41.606 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: Unevictable through HugePages_Free compared against HugePages_Rsvd; no match, continue]
00:03:41.607 20:10:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:41.607 20:10:56 -- setup/common.sh@33 -- # echo 0
00:03:41.607 20:10:56 -- setup/common.sh@33 -- # return 0
00:03:41.607 20:10:56 -- setup/hugepages.sh@100 -- # resv=0
00:03:41.607 20:10:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:41.607 nr_hugepages=512
00:03:41.607 resv_hugepages=0
00:03:41.607 surplus_hugepages=0
00:03:41.607 anon_hugepages=0
00:03:41.607 20:10:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:41.607 20:10:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:41.607 20:10:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:41.607 20:10:56 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:41.607 20:10:56 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:03:41.607 20:10:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:41.607 20:10:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:41.607 20:10:56 -- setup/common.sh@18 -- # local node=
00:03:41.607 20:10:56 -- setup/common.sh@19 -- # local var val
00:03:41.607 20:10:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.607 20:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.607 20:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.607 20:10:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.607 20:10:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.607 20:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.607 20:10:56 -- setup/common.sh@31 -- # IFS=': '
00:03:41.607 20:10:56 -- setup/common.sh@31 -- # read -r var val _
00:03:41.607 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8933080 kB' 'MemAvailable: 10482756 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467500 kB' 'Inactive: 1415892 kB' 'Active(anon): 128292 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119368 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163592 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100136 kB' 'KernelStack: 6592 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 319224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:41.607 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through SwapTotal compared against HugePages_Total; no match, continue]
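Note: the hugepages.sh@107/@109 arithmetic above is the test's bookkeeping assertion: the configured hugepage count must be reconcilable with what the kernel reports once surplus and reserved pages are folded in. A loose sketch of that check, using the get_meminfo sketch earlier; the exact control flow of setup/hugepages.sh is not visible in this excerpt, so treat the ordering and the target variable name as assumptions:

    # Loose sketch: reconcile the requested hugepage count with the kernel's.
    target=512                                   # pages this test configured (assumed name)
    surp=$(get_meminfo HugePages_Surp)           # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)           # 0 in this run
    nr_hugepages=$(get_meminfo HugePages_Total)  # 512 in this run
    (( target == nr_hugepages + surp + resv )) \
        || echo "hugepage accounting mismatch" >&2

With 512 total, 0 surplus, and 0 reserved, the check holds and the run proceeds.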
00:03:41.608 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: SwapFree through HardwareCorrupted compared against HugePages_Total; no match, continue]
00:03:41.608 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: AnonHugePages through Unaccepted compared against HugePages_Total; no match, continue]
00:03:41.609 20:10:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.609 20:10:56 -- setup/common.sh@33 -- # echo 512
00:03:41.609 20:10:56 -- setup/common.sh@33 -- # return 0
00:03:41.609 20:10:56 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:41.609 20:10:56 -- setup/hugepages.sh@112 -- # get_nodes
00:03:41.609 20:10:56 -- setup/hugepages.sh@27 -- # local node
00:03:41.609 20:10:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:41.609 20:10:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:41.609 20:10:56 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:41.609 20:10:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:41.609 20:10:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:41.609 20:10:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:41.609 20:10:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:41.609 20:10:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.609 20:10:56 -- setup/common.sh@18 -- # local node=0
00:03:41.609 20:10:56 -- setup/common.sh@19 -- # local var val
00:03:41.609 20:10:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.609 20:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.609 20:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:41.609 20:10:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:41.609 20:10:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.609 20:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.609 20:10:56 -- setup/common.sh@31 -- # IFS=': '
00:03:41.609 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8933116 kB' 'MemUsed: 3303972 kB' 'SwapCached: 0 kB' 'Active: 467360 kB' 'Inactive: 1415892 kB' 'Active(anon): 128152 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1765592 kB' 'Mapped: 50788 kB' 'AnonPages: 119268 kB' 'Shmem: 10492 kB' 'KernelStack: 6544 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63456 kB' 'Slab: 163612 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:41.609 20:10:56 -- setup/common.sh@31 -- # read -r var val _
00:03:41.609 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through Active(anon) compared against HugePages_Surp; no match, continue]
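Note: the get_nodes call above discovers NUMA topology by globbing sysfs; on this single-node VM it finds node0 only (no_nodes=1), and the per-node pass then repeats the HugePages_Surp lookup against /sys/devices/system/node/node0/meminfo. A minimal sketch of that discovery, reconstructed from the trace (simplified; deriving no_nodes from the array size is an assumption, since the trace only shows the resulting no_nodes=1):

    # Sketch: enumerate NUMA nodes and record the expected hugepages per node.
    shopt -s extglob nullglob   # extglob for +([0-9]), nullglob for zero matches
    declare -A nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} keeps only the numeric suffix, e.g. "0".
            nodes_sys[${node##*node}]=512
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail if sysfs exposed no nodes
    }
    get_nodes && echo "nodes found: ${!nodes_sys[*]} (no_nodes=$no_nodes)"

Passing a node index into get_meminfo (as in get_meminfo HugePages_Surp 0 above) is what switches the lookup from /proc/meminfo to the node's own meminfo file.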
00:03:41.609 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: Inactive(anon) through FilePmdMapped compared against HugePages_Surp; no match, continue]
00:03:41.610 20:10:56 -- setup/common.sh@32 --
# continue 00:03:41.610 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.610 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.610 20:10:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.610 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.610 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.610 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.610 20:10:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.610 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.610 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.610 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.610 20:10:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.610 20:10:56 -- setup/common.sh@32 -- # continue 00:03:41.610 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.610 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.610 20:10:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.610 20:10:56 -- setup/common.sh@33 -- # echo 0 00:03:41.610 20:10:56 -- setup/common.sh@33 -- # return 0 00:03:41.610 20:10:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.610 20:10:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.610 20:10:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.610 20:10:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.610 node0=512 expecting 512 00:03:41.610 20:10:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:41.610 20:10:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:41.610 00:03:41.610 real 0m0.563s 00:03:41.610 user 0m0.241s 00:03:41.610 sys 0m0.343s 00:03:41.610 ************************************ 00:03:41.610 20:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.610 20:10:56 -- common/autotest_common.sh@10 -- # set +x 00:03:41.610 END TEST per_node_1G_alloc 00:03:41.610 ************************************ 00:03:41.610 20:10:56 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:41.610 20:10:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.610 20:10:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.610 20:10:56 -- common/autotest_common.sh@10 -- # set +x 00:03:41.610 ************************************ 00:03:41.610 START TEST even_2G_alloc 00:03:41.610 ************************************ 00:03:41.610 20:10:56 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:41.610 20:10:56 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:41.610 20:10:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:41.610 20:10:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:41.610 20:10:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:41.610 20:10:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:41.610 20:10:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:41.610 20:10:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:41.610 20:10:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:41.610 20:10:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:41.610 20:10:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:41.610 20:10:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:41.610 20:10:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:41.610 20:10:56 -- setup/hugepages.sh@69 -- # (( 
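The sizing step just traced reduces to a single integer division. A minimal standalone sketch of that arithmetic (variable names mirror the trace; the kB units for size are inferred from the traced values, since 2097152 kB / 2048 kB per page = 1024 pages, the 2 GiB the test name promises):

# Sketch: the nr_hugepages computation behind get_test_nr_hugepages 2097152.
size=2097152              # requested test size in kB (2 GiB), per the trace
default_hugepages=2048    # Hugepagesize from /proc/meminfo, in kB
(( size >= default_hugepages ))                      # the @55 guard seen in the trace
echo "nr_hugepages=$(( size / default_hugepages ))"  # prints nr_hugepages=1024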
00:03:41.610 20:10:56 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:41.610 20:10:56 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:41.610 20:10:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:41.610 20:10:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:41.610 20:10:56 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:41.610 20:10:56 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:41.610 20:10:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:41.610 20:10:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:41.610 20:10:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.610 20:10:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:41.610 20:10:56 -- setup/hugepages.sh@83 -- # : 0
00:03:41.610 20:10:56 -- setup/hugepages.sh@84 -- # : 0
00:03:41.610 20:10:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.610 20:10:56 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:41.610 20:10:56 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:41.610 20:10:56 -- setup/hugepages.sh@153 -- # setup output
00:03:41.610 20:10:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.610 20:10:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:42.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:42.185 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.185 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.185 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.185 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.185 20:10:56 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:42.185 20:10:56 -- setup/hugepages.sh@89 -- # local node
00:03:42.185 20:10:56 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:42.185 20:10:56 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:42.185 20:10:56 -- setup/hugepages.sh@92 -- # local surp
00:03:42.185 20:10:56 -- setup/hugepages.sh@93 -- # local resv
00:03:42.185 20:10:56 -- setup/hugepages.sh@94 -- # local anon
00:03:42.185 20:10:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:42.185 20:10:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:42.185 20:10:56 -- setup/common.sh@17-29 -- # [xtrace condensed: get_meminfo locals set up (get=AnonHugePages, node=, mem_f=/proc/meminfo), no per-node meminfo present, mapfile -t mem reads the snapshot below]
00:03:42.185 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7889412 kB' 'MemAvailable: 9439088 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467956 kB' 'Inactive: 1415892 kB' 'Active(anon): 128748 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119852 kB' 'Mapped: 51032 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163676 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100220 kB' 'KernelStack: 6684 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:42.185-186 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skips the non-matching fields (MemTotal through HardwareCorrupted) while scanning for AnonHugePages]
00:03:42.186 20:10:56 -- setup/common.sh@33 -- # echo 0
00:03:42.186 20:10:56 -- setup/common.sh@33 -- # return 0
00:03:42.186 20:10:56 -- setup/hugepages.sh@97 -- # anon=0
00:03:42.186 20:10:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:42.186 20:10:56 -- setup/common.sh@17-29 -- # [xtrace condensed: get_meminfo locals set up (get=HugePages_Surp), mapfile -t mem reads the snapshot below]
00:03:42.186 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7889664 kB' 'MemAvailable: 9439340 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467700 kB' 'Inactive: 1415892 kB' 'Active(anon): 128492 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 50900 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163672 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100216 kB' 'KernelStack: 6604 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
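Every get_meminfo call in this log has the same shape: read all of /proc/meminfo into an array, then walk it with IFS=': ' read -r var val _, continuing past every field that is not the one requested and echoing the value on the first match. A condensed standalone equivalent of that loop (a sketch under the same conventions, not the exact scripts/setup/common.sh source, which also takes an optional per-node argument elided here):

# Sketch: field lookup over /proc/meminfo, condensed from the traced loop.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each miss is one "[[ ... ]] / continue" pair in the xtrace
        echo "$val"
        return 0
    done < /proc/meminfo
}

surp=$(get_meminfo HugePages_Surp)   # 0 in this run, as echoed by common.sh@33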
00:03:42.186-187 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skips the non-matching fields (MemTotal through HugePages_Rsvd) while scanning for HugePages_Surp]
00:03:42.187 20:10:56 -- setup/common.sh@33 -- # echo 0
00:03:42.188 20:10:56 -- setup/common.sh@33 -- # return 0
00:03:42.188 20:10:56 -- setup/hugepages.sh@99 -- # surp=0
00:03:42.188 20:10:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:42.188 20:10:56 -- setup/common.sh@17-29 -- # [xtrace condensed: get_meminfo locals set up (get=HugePages_Rsvd), mapfile -t mem reads the snapshot below]
00:03:42.188 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7889916 kB' 'MemAvailable: 9439592 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467484 kB' 'Inactive: 1415892 kB' 'Active(anon): 128276 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 50888 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163672 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100216 kB' 'KernelStack: 6588 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
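The snapshot just printed is internally consistent with a single 2 MiB pool: the Hugetlb line is HugePages_Total times Hugepagesize. Checking with the snapshot's own numbers:

# 1024 pages x 2048 kB/page = 2097152 kB, the Hugetlb line in the snapshot
echo $(( 1024 * 2048 ))   # 2097152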
00:03:42.188-189 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skips the non-matching fields (MemTotal through HugePages_Free) while scanning for HugePages_Rsvd]
00:03:42.189 20:10:56 -- setup/common.sh@33 -- # echo 0
00:03:42.189 20:10:56 -- setup/common.sh@33 -- # return 0
00:03:42.189 20:10:56 -- setup/hugepages.sh@100 -- # resv=0
00:03:42.189 nr_hugepages=1024
00:03:42.189 resv_hugepages=0
00:03:42.189 20:10:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:42.189 20:10:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:42.189 surplus_hugepages=0
00:03:42.189 anon_hugepages=0
00:03:42.189 20:10:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:42.189 20:10:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:42.189 20:10:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:42.189 20:10:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
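The @107 check just traced is the accounting identity the whole verify pass is built around: the configured page count must equal the kernel's total once surplus and reserved pages are folded in. A compressed, self-contained sketch of the same identity (the awk keys are real /proc/meminfo field names; the variable names are illustrative):

# Sketch: the accounting check behind verify_nr_hugepages.
expected=1024                                                   # NRHUGE requested by the test
nr=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1024 in this run
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)  # 0 in this run
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)  # 0 in this run
echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"
(( expected == nr + surp + resv )) && echo "hugepage accounting OK"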
00:03:42.189 20:10:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:42.189 20:10:56 -- setup/common.sh@17-29 -- # [xtrace condensed: get_meminfo locals set up (get=HugePages_Total, node=, mem_f=/proc/meminfo), no per-node meminfo present, mapfile -t mem reads the snapshot below]
00:03:42.189 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7889916 kB' 'MemAvailable: 9439592 kB' 'Buffers: 2684 kB' 'Cached: 1762908 kB' 'SwapCached: 0 kB' 'Active: 467464 kB' 'Inactive: 1415892 kB' 'Active(anon): 128256 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119344 kB' 'Mapped: 50888 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163672 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100216 kB' 'KernelStack: 6588 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:42.189-190 20:10:56 -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skips the non-matching fields while scanning for HugePages_Total; MemTotal through Bounce are shown before this excerpt ends]
setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # continue 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.190 20:10:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.190 20:10:56 -- setup/common.sh@33 -- # echo 1024 00:03:42.190 20:10:56 -- setup/common.sh@33 -- # return 0 00:03:42.190 20:10:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.190 20:10:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.190 20:10:56 -- setup/hugepages.sh@27 -- # local node 00:03:42.190 20:10:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.190 20:10:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.190 20:10:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:42.190 20:10:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.190 20:10:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.190 20:10:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.190 20:10:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.190 20:10:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.190 20:10:56 -- setup/common.sh@18 -- # local node=0 00:03:42.190 20:10:56 -- setup/common.sh@19 -- # local var val 00:03:42.190 20:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.190 20:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.190 20:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.190 20:10:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.190 20:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.190 20:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.190 20:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.191 20:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.191 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7889916 kB' 'MemUsed: 4347172 kB' 'SwapCached: 0 kB' 'Active: 467336 kB' 'Inactive: 1415892 kB' 'Active(anon): 128128 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1765592 kB' 'Mapped: 50888 kB' 'AnonPages: 119220 kB' 'Shmem: 10492 kB' 'KernelStack: 6624 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63456 kB' 'Slab: 163664 kB' 'SReclaimable: 63456 
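The get_meminfo helper traced above reads /proc/meminfo for system-wide queries and, when a node argument is given (here node=0), switches to /sys/devices/system/node/node0/meminfo and strips the "Node <N> " prefix those per-node files carry. A condensed sketch of the same lookup in sed/awk rather than the script's pure-bash read loop (my paraphrase, not the SPDK helper itself):

  get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # per-node files live under sysfs and prefix every line with "Node <N> "
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    sed -E 's/^Node [0-9]+ +//' "$mem_f" |
      awk -F': +' -v key="$get" '$1 == key { sub(/ kB$/, "", $2); print $2; exit }'
  }

  get_meminfo HugePages_Surp 0   # prints 0 on this box, matching the trace below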
00:03:42.191 20:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7889916 kB' 'MemUsed: 4347172 kB' 'SwapCached: 0 kB' 'Active: 467336 kB' 'Inactive: 1415892 kB' 'Active(anon): 128128 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1765592 kB' 'Mapped: 50888 kB' 'AnonPages: 119220 kB' 'Shmem: 10492 kB' 'KernelStack: 6624 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63456 kB' 'Slab: 163664 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:42.191 20:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.191 20:10:56 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 read-compare-continue cycle condensed for the remaining node0 fields ...]
00:03:42.191 20:10:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.191 20:10:56 -- setup/common.sh@33 -- # echo 0
00:03:42.191 20:10:56 -- setup/common.sh@33 -- # return 0
00:03:42.191 20:10:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
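What the even_2G_alloc assertions above amount to: setup/hugepages.sh@110 checks that the kernel-reported pool equals the requested pages plus surplus plus reserved, and @115-@117 then fold each node's reserved and surplus share into the expected per-node count. Spelled out with this run's values, reusing the get_meminfo sketch above:

  nr_hugepages=1024 surp=0 resv=0
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) \
    || echo 'hugepage pool drifted'
  # expected pages on node 0: its test share plus reserved plus its own surplus
  node0=$(( 1024 + resv + $(get_meminfo HugePages_Surp 0) ))
  echo "node0=$node0 expecting 1024"   # matches the log line just below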
00:03:42.191 20:10:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:42.191 20:10:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:42.191 20:10:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:42.191 node0=1024 expecting 1024
00:03:42.191 20:10:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:42.191 20:10:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:42.191
00:03:42.191 real 0m0.568s
00:03:42.191 user 0m0.250s
00:03:42.191 sys 0m0.342s
00:03:42.191 ************************************
00:03:42.191 END TEST even_2G_alloc
00:03:42.191 ************************************
00:03:42.191 20:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:42.191 20:10:56 -- common/autotest_common.sh@10 -- # set +x
00:03:42.192 20:10:57 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:42.192 20:10:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:42.192 20:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:42.192 20:10:57 -- common/autotest_common.sh@10 -- # set +x
00:03:42.192 ************************************
00:03:42.192 START TEST odd_alloc
00:03:42.192 ************************************
00:03:42.192 20:10:57 -- common/autotest_common.sh@1104 -- # odd_alloc
00:03:42.192 20:10:57 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:42.192 20:10:57 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:42.192 20:10:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:42.192 20:10:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:42.192 20:10:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:42.192 20:10:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:42.192 20:10:57 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:42.192 20:10:57 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:42.192 20:10:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:42.192 20:10:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:42.192 20:10:57 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:42.192 20:10:57 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:42.192 20:10:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:42.192 20:10:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:42.192 20:10:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.192 20:10:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:42.192 20:10:57 -- setup/hugepages.sh@83 -- # : 0
00:03:42.192 20:10:57 -- setup/hugepages.sh@84 -- # : 0
00:03:42.192 20:10:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.192 20:10:57 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:42.192 20:10:57 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:42.192 20:10:57 -- setup/hugepages.sh@160 -- # setup output
00:03:42.192 20:10:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:42.192 20:10:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:42.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:42.783 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.783 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.783 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:42.783 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
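The nr_hugepages=1025 above follows from HUGEMEM=2049: 2049 MB is 2098176 kB, which at the default 2048 kB hugepage size is 1024.5 pages, so the pool is sized to a deliberately odd 1025 pages (the rounding step here is inferred from the traced inputs and result, which is the point of the odd_alloc test):

  size_kb=$(( 2049 * 1024 ))    # HUGEMEM=2049 MB -> 2098176 kB, as traced
  hugepagesize_kb=2048          # Hugepagesize from /proc/meminfo
  echo $(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # -> 1025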
00:03:42.783 20:10:57 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:42.783 20:10:57 -- setup/hugepages.sh@89 -- # local node
00:03:42.783 20:10:57 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:42.783 20:10:57 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:42.783 20:10:57 -- setup/hugepages.sh@92 -- # local surp
00:03:42.783 20:10:57 -- setup/hugepages.sh@93 -- # local resv
00:03:42.783 20:10:57 -- setup/hugepages.sh@94 -- # local anon
00:03:42.783 20:10:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:42.783 20:10:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:42.783 20:10:57 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:42.783 20:10:57 -- setup/common.sh@18 -- # local node=
00:03:42.783 20:10:57 -- setup/common.sh@19 -- # local var val
00:03:42.783 20:10:57 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.783 20:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.783 20:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.783 20:10:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.783 20:10:57 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.784 20:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.784 20:10:57 -- setup/common.sh@31 -- # IFS=': '
00:03:42.784 20:10:57 -- setup/common.sh@31 -- # read -r var val _
00:03:42.784 20:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7893136 kB' 'MemAvailable: 9442816 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467616 kB' 'Inactive: 1415896 kB' 'Active(anon): 128408 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119468 kB' 'Mapped: 50904 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163768 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100312 kB' 'KernelStack: 6592 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:42.784 20:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:42.784 20:10:57 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 read-compare-continue cycle condensed for the remaining fields ...]
00:03:42.785 20:10:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:42.785 20:10:57 -- setup/common.sh@33 -- # echo 0
00:03:42.785 20:10:57 -- setup/common.sh@33 -- # return 0
00:03:42.785 20:10:57 -- setup/hugepages.sh@97 -- # anon=0
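verify_nr_hugepages gathers three system-wide counters before checking the pool: AnonHugePages (0 above, confirming transparent hugepages are not inflating the count while THP is set to [madvise]), then HugePages_Surp and HugePages_Rsvd, which are queried next in the trace. The same three values in one shot (a sketch, not a call the suite makes):

  grep -E '^(AnonHugePages|HugePages_(Surp|Rsvd)):' /proc/meminfo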
00:03:42.785 20:10:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:42.785 20:10:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.785 20:10:57 -- setup/common.sh@18 -- # local node=
00:03:42.785 20:10:57 -- setup/common.sh@19 -- # local var val
00:03:42.785 20:10:57 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.785 20:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.785 20:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.785 20:10:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.785 20:10:57 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.785 20:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.785 20:10:57 -- setup/common.sh@31 -- # IFS=': '
00:03:42.785 20:10:57 -- setup/common.sh@31 -- # read -r var val _
00:03:42.785 20:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7893136 kB' 'MemAvailable: 9442816 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467264 kB' 'Inactive: 1415896 kB' 'Active(anon): 128056 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119136 kB' 'Mapped: 50792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163736 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100280 kB' 'KernelStack: 6608 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:42.785 20:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.785 20:10:57 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 read-compare-continue cycle condensed for the remaining fields ...]
00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.786 20:10:57 -- setup/common.sh@33 -- # echo 0
00:03:42.786 20:10:57 -- setup/common.sh@33 -- # return 0
00:03:42.786 20:10:57 -- setup/hugepages.sh@99 -- # surp=0
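For context on the value just read: HugePages_Surp counts pages allocated beyond nr_hugepages, which the kernel only hands out when hugepage overcommit is enabled, so with the default of no overcommit it is expected to read 0, as it does here:

  cat /proc/sys/vm/nr_overcommit_hugepages   # 0 by default, so HugePages_Surp stays 0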
00:03:42.786 20:10:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:42.786 20:10:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:42.786 20:10:57 -- setup/common.sh@18 -- # local node=
00:03:42.786 20:10:57 -- setup/common.sh@19 -- # local var val
00:03:42.786 20:10:57 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.786 20:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.786 20:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.786 20:10:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.786 20:10:57 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.786 20:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': '
00:03:42.786 20:10:57 -- setup/common.sh@31 -- # read -r var val _
00:03:42.786 20:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7893136 kB' 'MemAvailable: 9442816 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467648 kB' 'Inactive: 1415896 kB'
'Active(anon): 128440 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119532 kB' 'Mapped: 50792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163728 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100272 kB' 'KernelStack: 6576 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.786 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.786 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.786 20:10:57 -- 
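The get_meminfo cycle traced above (setup/common.sh@17 through @33) is the harness's pattern for pulling one key out of /proc/meminfo: slurp the file, strip any "Node N " prefix, then split each line on ': ' and echo the value column of the first matching key. A minimal standalone sketch of the same pattern follows; the function name is illustrative, not the harness's code.

    #!/usr/bin/env bash
    # Sketch of the traced pattern: fetch one value from /proc/meminfo.
    # get_meminfo_value is an illustrative name, not the harness function.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # First field is the key (e.g. HugePages_Rsvd), second the value.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Rsvd   # prints 0 on the machine traced above

Because IFS contains both ':' and ' ', a line such as "MemTotal: 12237088 kB" splits into key, value, and a trailing unit that the throwaway third variable absorbs.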
[xtrace elided: setup/common.sh@32 continues over each key that does not match HugePages_Rsvd]
00:03:42.787 20:10:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:42.787 20:10:57 -- setup/common.sh@33 -- # echo 0
00:03:42.787 20:10:57 -- setup/common.sh@33 -- # return 0
00:03:42.787 nr_hugepages=1025
00:03:42.787 resv_hugepages=0
00:03:42.787 surplus_hugepages=0
00:03:42.787 20:10:57 -- setup/hugepages.sh@100 -- # resv=0
00:03:42.787 20:10:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:42.787 20:10:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:42.787 20:10:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:42.787 anon_hugepages=0
00:03:42.787 20:10:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:42.787 20:10:57 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:42.787 20:10:57 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:42.787 20:10:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:42.787 20:10:57 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:42.787 20:10:57 -- setup/common.sh@18 -- # local node=
00:03:42.787 20:10:57 -- setup/common.sh@19 -- # local var val
00:03:42.787 20:10:57 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.787 20:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.787 20:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.787 20:10:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.787 20:10:57 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.787 20:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.787 20:10:57 -- setup/common.sh@31 -- # IFS=': '
00:03:42.787 20:10:57 -- setup/common.sh@31 -- # read -r var val _
00:03:42.787 20:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7894888 kB' 'MemAvailable: 9444568 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467672 kB' 'Inactive: 1415896 kB' 'Active(anon): 128464 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119508 kB' 'Mapped: 50792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163724 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100268 kB' 'KernelStack: 6576 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
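The checks at setup/hugepages.sh@107 and @110 assert that the configured page count still matches the kernel's books: HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages. A sketch of that arithmetic outside the harness; the awk field extraction is an assumption for brevity, not the harness's code.

    #!/usr/bin/env bash
    # Re-derive the identity the harness asserts: Total == requested + Surp + Rsvd.
    expected=1025   # the odd page count this test configured
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( total == expected + surp + resv )); then
        echo "consistent: $total == $expected + $surp + $resv"
    else
        echo "mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
    fi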
[xtrace elided: setup/common.sh@32 continues over each key that does not match HugePages_Total]
00:03:42.789 20:10:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.789 20:10:57 -- setup/common.sh@33 -- # echo 1025
00:03:42.789 20:10:57 -- setup/common.sh@33 -- # return 0
00:03:42.789 20:10:57 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:42.789 20:10:57 -- setup/hugepages.sh@112 -- # get_nodes
00:03:42.789 20:10:57 -- setup/hugepages.sh@27 -- # local node
00:03:42.789 20:10:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.789 20:10:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:03:42.789 20:10:57 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:42.789 20:10:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:42.789 20:10:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:42.789 20:10:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:42.789 20:10:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:42.789 20:10:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.789 20:10:57 -- setup/common.sh@18 -- # local node=0
00:03:42.789 20:10:57 -- setup/common.sh@19 -- # local var val
00:03:42.789 20:10:57 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.789 20:10:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.789 20:10:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:42.789 20:10:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:42.789 20:10:57 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.789 20:10:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.789 20:10:57 -- setup/common.sh@31 -- # IFS=': '
00:03:42.789 20:10:57 -- setup/common.sh@31 -- # read -r var val _
00:03:42.789 20:10:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7894888 kB' 'MemUsed: 4342200 kB' 'SwapCached: 0 kB' 'Active: 467224 kB' 'Inactive: 1415896 kB' 'Active(anon): 128016 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1765596 kB' 'Mapped: 50792 kB' 'AnonPages: 119372 kB' 'Shmem: 10492 kB' 'KernelStack: 6592 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63456 kB' 'Slab: 163724 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
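When get_meminfo is called with a node argument (HugePages_Surp 0 above), setup/common.sh@24 swaps /proc/meminfo for the node-local file, and the "Node N " prefix those lines carry is stripped at @29 before parsing. A sketch of the per-node variant, assuming only the standard sysfs layout; the sed-based prefix strip stands in for the harness's glob expansion.

    #!/usr/bin/env bash
    # Per-node lookup: same parse loop, different file, prefix stripped first.
    node=0
    mem_f=/sys/devices/system/node/node${node}/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] && { echo "$val"; break; }
    done < <(sed "s/^Node ${node} //" "$mem_f")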
[xtrace elided: setup/common.sh@32 continues over each node0 key that does not match HugePages_Surp]
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # continue 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.790 20:10:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.790 20:10:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.790 20:10:57 -- setup/common.sh@33 -- # echo 0 00:03:42.790 20:10:57 -- setup/common.sh@33 -- # return 0 00:03:42.790 20:10:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.790 20:10:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.790 20:10:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.790 20:10:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.790 node0=1025 expecting 1025 00:03:42.790 20:10:57 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:42.790 20:10:57 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:42.790 00:03:42.790 real 0m0.593s 00:03:42.790 user 0m0.261s 00:03:42.790 sys 0m0.340s 00:03:42.790 20:10:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.790 ************************************ 00:03:42.790 END TEST odd_alloc 00:03:42.790 ************************************ 00:03:42.790 20:10:57 -- common/autotest_common.sh@10 -- # set +x 00:03:42.790 20:10:57 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:42.790 20:10:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.790 20:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.790 20:10:57 -- common/autotest_common.sh@10 -- # set +x 00:03:42.790 ************************************ 00:03:42.790 START TEST custom_alloc 00:03:42.790 ************************************ 00:03:42.790 20:10:57 -- common/autotest_common.sh@1104 
00:03:42.790 20:10:57 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:42.790 20:10:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:42.790 20:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:42.790 20:10:57 -- common/autotest_common.sh@10 -- # set +x
00:03:42.790 ************************************
00:03:42.790 START TEST custom_alloc
00:03:42.790 ************************************
00:03:42.790 20:10:57 -- common/autotest_common.sh@1104 -- # custom_alloc
00:03:42.790 20:10:57 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:42.790 20:10:57 -- setup/hugepages.sh@169 -- # local node
00:03:42.790 20:10:57 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:42.790 20:10:57 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:42.790 20:10:57 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:42.790 20:10:57 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:42.790 20:10:57 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:42.790 20:10:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:42.790 20:10:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:42.790 20:10:57 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:42.790 20:10:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[xtrace elided: get_test_nr_hugepages_per_node distributes the 512 pages over the single node: user_nodes=(), _nr_hugepages=512, _no_nodes=1, nodes_test[0]=512]
00:03:42.790 20:10:57 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:42.790 20:10:57 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:03:42.790 20:10:57 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:42.790 20:10:57 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:42.790 20:10:57 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:42.790 20:10:57 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[xtrace elided: second get_test_nr_hugepages_per_node pass over nodes_hp, same result: nodes_test[0]=512]
00:03:42.790 20:10:57 -- setup/hugepages.sh@78 -- # return 0
00:03:42.790 20:10:57 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:03:42.790 20:10:57 -- setup/hugepages.sh@187 -- # setup output
00:03:42.790 20:10:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:42.790 20:10:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:43.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:43.366 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:43.366 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:43.366 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:43.366 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:43.366 20:10:58 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:03:43.366 20:10:58 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:43.366 20:10:58 -- setup/hugepages.sh@89 -- # local node
00:03:43.366 20:10:58 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:43.366 20:10:58 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:43.366 20:10:58 -- setup/hugepages.sh@92 -- # local surp
00:03:43.366 20:10:58 -- setup/hugepages.sh@93 -- # local resv
00:03:43.366 20:10:58 -- setup/hugepages.sh@94 -- # local anon
00:03:43.366 20:10:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:43.367 20:10:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:43.367 20:10:58 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:43.367 20:10:58 -- setup/common.sh@18 -- # local node=
00:03:43.367 20:10:58 -- setup/common.sh@19 -- # local var val
00:03:43.367 20:10:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.367 20:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.367 20:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.367 20:10:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.367 20:10:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.367 20:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.367 20:10:58 -- setup/common.sh@31 -- # IFS=': '
00:03:43.367 20:10:58 -- setup/common.sh@31 -- # read -r var val _
00:03:43.367 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8948272 kB' 'MemAvailable: 10497952 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467756 kB' 'Inactive: 1415896 kB' 'Active(anon): 128548 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119396 kB' 'Mapped: 50992 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163708 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100252 kB' 'KernelStack: 6596 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
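custom_alloc drives the same machinery through HUGENODE: get_test_nr_hugepages 1048576 resolves to 512 pages (1048576 kB requested / 2048 kB per page), and 'nodes_hp[0]=512' asks setup.sh to place all of them on node 0. The per-node sysfs knob below is the stock kernel interface for that placement; the sketch stands in for the harness plumbing, not for setup.sh itself.

    #!/usr/bin/env bash
    # Place 512 x 2048 kB hugepages on NUMA node 0, then verify.
    node=0 pages=512
    knob=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    echo "$pages" | sudo tee "$knob" > /dev/null
    (( $(cat "$knob") == pages )) && echo "node${node}=${pages} allocated"

The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 reads the current transparent-hugepage mode (the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled); only when THP is not fully disabled does the AnonHugePages fetch that follows matter to the verification.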
[xtrace elided: setup/common.sh@32 continues over each key that does not match AnonHugePages]
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.368 20:10:58 -- setup/common.sh@33 -- # echo 0 00:03:43.368 20:10:58 -- setup/common.sh@33 -- # return 0 00:03:43.368 20:10:58 -- setup/hugepages.sh@97 -- # anon=0 00:03:43.368 20:10:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.368 20:10:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.368 20:10:58 -- setup/common.sh@18 -- # local node= 00:03:43.368 20:10:58 -- setup/common.sh@19 -- # local var val 00:03:43.368 20:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.368 20:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.368 20:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.368 20:10:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.368 20:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.368 20:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.368 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8948024 kB' 'MemAvailable: 10497704 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467580 kB' 'Inactive: 1415896 kB' 'Active(anon): 128372 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119456 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163684 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100228 kB' 'KernelStack: 6592 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.368 20:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.368 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.368 20:10:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.368 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.368 20:10:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.368 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.368 20:10:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.368 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.368 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.368 20:10:58 -- setup/common.sh@32 -- # [[ Cached == 
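The wall of [[ ... ]] / continue pairs above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "field: value" line at a time until the requested key matches, then echoing the value. Below is a minimal standalone reconstruction of that pattern, pieced together from the xtrace; the shipped SPDK helper may differ in detail, so treat anything not visible in the trace as an assumption.

```bash
#!/usr/bin/env bash
# Reconstruction of the get_meminfo pattern traced above (setup/common.sh).
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, read the per-node stats from sysfs instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "field: value" lines until the requested key matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total     # -> 512 in the run above
get_meminfo HugePages_Surp 0    # surplus pages on NUMA node 0
```

The linear scan keeps the helper dependency-free (no awk/grep), and the caller captures the value with $(get_meminfo ...); the cost is that under xtrace every miss shows up as its own [[ ]] / continue pair, which is exactly the noise in this log.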
00:03:43.368 20:10:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:43.368 20:10:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.368 20:10:58 -- setup/common.sh@18 -- # local node=
00:03:43.368 20:10:58 -- setup/common.sh@19 -- # local var val
00:03:43.368 20:10:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.368 20:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.368 20:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.368 20:10:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.368 20:10:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.368 20:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.368 20:10:58 -- setup/common.sh@31 -- # IFS=': '
00:03:43.368 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8948024 kB' 'MemAvailable: 10497704 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467580 kB' 'Inactive: 1415896 kB' 'Active(anon): 128372 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119456 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163684 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100228 kB' 'KernelStack: 6592 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:43.368 20:10:58 -- setup/common.sh@31 -- # read -r var val _
00:03:43.369 [... "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" pairs for MemTotal through HugePages_Rsvd elided ...]
00:03:43.369 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.369 20:10:58 -- setup/common.sh@33 -- # echo 0
00:03:43.369 20:10:58 -- setup/common.sh@33 -- # return 0
00:03:43.369 20:10:58 -- setup/hugepages.sh@99 -- # surp=0
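A side note on how the trace reads: the right-hand side of each test prints as \H\u\g\e\P\a\g\e\s\_\S\u\r\p because the comparison quotes its operand, and bash's xtrace backslash-escapes every character of a quoted word inside [[ ]] so the logged form stays a literal string match rather than a glob. A two-line demo of the same effect (the variable name here is just illustrative):

```bash
set -x
get=HugePages_Surp
# xtrace prints this as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ MemTotal == "$get" ]] || echo "no match"
```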
00:03:43.369 20:10:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:43.369 20:10:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:43.369 [... local declarations and /proc/meminfo capture identical to the previous call elided ...]
00:03:43.369 20:10:58 -- setup/common.sh@31 -- # IFS=': '
00:03:43.369 20:10:58 -- setup/common.sh@31 -- # read -r var val _
00:03:43.369 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8948024 kB' 'MemAvailable: 10497704 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467648 kB' 'Inactive: 1415896 kB' 'Active(anon): 128440 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119568 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163684 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100228 kB' 'KernelStack: 6608 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:43.370 [... "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" pairs for MemTotal through HugePages_Free elided ...]
00:03:43.370 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:43.370 20:10:58 -- setup/common.sh@33 -- # echo 0
00:03:43.370 20:10:58 -- setup/common.sh@33 -- # return 0
00:03:43.370 nr_hugepages=512
00:03:43.370 resv_hugepages=0
00:03:43.370 surplus_hugepages=0
00:03:43.370 20:10:58 -- setup/hugepages.sh@100 -- # resv=0
00:03:43.370 20:10:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:43.370 20:10:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:43.370 20:10:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:43.370 20:10:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:43.370 anon_hugepages=0
00:03:43.370 20:10:58 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:43.370 20:10:58 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
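The arithmetic at hugepages.sh@107 and @109 is the core assertion of this step: the 512 pages the test configured must be fully accounted for by the static pool plus surplus plus reserved pages, and the kernel-reported total (fetched next, at @110) must match the request. A hedged sketch of that bookkeeping, using the values echoed above (nr_hugepages=512, surp=0, resv=0) and the get_meminfo reconstruction from earlier; the exact expressions in test/setup/hugepages.sh may group terms differently:

```bash
# Values gathered by the preceding get_meminfo calls in the trace.
anon=$(get_meminfo AnonHugePages)    # 0 kB: no THP interference
surp=$(get_meminfo HugePages_Surp)   # 0: nothing allocated beyond the static pool
resv=$(get_meminfo HugePages_Rsvd)   # 0: nothing reserved but not yet faulted in
nr_hugepages=512                     # the pool size the test requested

# The request must be explained entirely by pool + surplus + reserved ...
(( 512 == nr_hugepages + surp + resv )) || { echo "accounting mismatch"; exit 1; }
# ... and the kernel-reported total must equal the request exactly.
(( $(get_meminfo HugePages_Total) == nr_hugepages )) || { echo "pool size mismatch"; exit 1; }
```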
00:03:43.370 20:10:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:43.370 20:10:58 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:43.370 [... local declarations and /proc/meminfo capture as before elided ...]
00:03:43.371 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8947268 kB' 'MemAvailable: 10496948 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467640 kB' 'Inactive: 1415896 kB' 'Active(anon): 128432 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119480 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163684 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100228 kB' 'KernelStack: 6560 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:43.371 [... "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" pairs for MemTotal through Unaccepted elided ...]
00:03:43.372 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:43.372 20:10:58 -- setup/common.sh@33 -- # echo 512
00:03:43.372 20:10:58 -- setup/common.sh@33 -- # return 0
00:03:43.372 20:10:58 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:43.372 20:10:58 -- setup/hugepages.sh@112 -- # get_nodes
00:03:43.372 20:10:58 -- setup/hugepages.sh@27 -- # local node
00:03:43.372 20:10:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.372 20:10:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:43.372 20:10:58 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:43.372 20:10:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:43.372 20:10:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:43.372 20:10:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:43.372 20:10:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:43.372 20:10:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.372 20:10:58 -- setup/common.sh@18 -- # local node=0
00:03:43.372 20:10:58 -- setup/common.sh@19 -- # local var val
00:03:43.372 20:10:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.372 20:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.372 20:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:43.372 20:10:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:43.372 20:10:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.372 20:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.372 20:10:58 -- setup/common.sh@31 -- # IFS=': '
00:03:43.372 20:10:58 -- setup/common.sh@31 -- # read -r var val _
00:03:43.372 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 8947268 kB' 'MemUsed: 3289820 kB' 'SwapCached: 0 kB' 'Active: 467444 kB' 'Inactive: 1415896 kB' 'Active(anon): 128236 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1765596 kB' 'Mapped: 50788 kB' 'AnonPages: 119312 kB' 'Shmem: 10492 kB' 'KernelStack: 6560 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63456 kB' 'Slab: 163684 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:43.372 [... "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" pairs for MemTotal through HugePages_Free elided ...]
00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.373 20:10:58 -- setup/common.sh@33 -- # echo 0
00:03:43.373 20:10:58 -- setup/common.sh@33 -- # return 0
00:03:43.373 node0=512 expecting 512
00:03:43.373 20:10:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:43.373 20:10:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:43.373 20:10:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:43.373 20:10:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:43.373 20:10:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:43.373 20:10:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
setup/common.sh@31 -- # read -r var val _ 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # continue 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.373 20:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.373 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.373 20:10:58 -- setup/common.sh@33 -- # echo 0 00:03:43.373 20:10:58 -- setup/common.sh@33 -- # return 0 00:03:43.373 node0=512 expecting 512 00:03:43.373 20:10:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.373 20:10:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.373 20:10:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.373 20:10:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.373 20:10:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:43.373 20:10:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:43.373 00:03:43.373 real 0m0.588s 00:03:43.373 user 0m0.245s 00:03:43.373 sys 0m0.352s 00:03:43.373 ************************************ 00:03:43.373 END TEST custom_alloc 00:03:43.373 ************************************ 00:03:43.373 20:10:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.373 20:10:58 -- common/autotest_common.sh@10 -- # set +x 00:03:43.634 20:10:58 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:43.634 20:10:58 -- 
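The block above is one full get_meminfo lookup: resolve a single meminfo key, optionally against one NUMA node's meminfo file, by scanning it line by line. The sketch below is reconstructed from the xtrace alone (common.sh@17-33); it is not the verbatim SPDK helper. The control flow and names follow the trace, everything else is assumption.

    #!/usr/bin/env bash
    # Sketch of the scan pattern traced above; needs bash 4+ for mapfile.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem

        # With a node argument, prefer that node's own meminfo
        # (common.sh@23-24); otherwise fall back to /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip it so the
        # "Field: value" shape matches /proc/meminfo (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")

        # One [[ ... ]] / continue pair per non-matching field is exactly
        # what fills the trace until the requested key is reached.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp 0, it reads /sys/devices/system/node/node0/meminfo and prints 0, the value consumed at hugepages.sh@117 above.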
00:03:43.634 20:10:58 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:43.634 20:10:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:43.634 20:10:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:43.634 20:10:58 -- common/autotest_common.sh@10 -- # set +x
00:03:43.634 ************************************
00:03:43.634 START TEST no_shrink_alloc
00:03:43.634 ************************************
00:03:43.634 20:10:58 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:03:43.634 20:10:58 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:43.634 20:10:58 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:43.634 20:10:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:43.634 20:10:58 -- setup/hugepages.sh@51 -- # shift
00:03:43.634 20:10:58 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:43.634 20:10:58 -- setup/hugepages.sh@52 -- # local node_ids
00:03:43.634 20:10:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:43.634 20:10:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:43.634 20:10:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:43.634 20:10:58 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:43.634 20:10:58 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:43.634 20:10:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:43.634 20:10:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:43.634 20:10:58 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:43.634 20:10:58 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:43.634 20:10:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:43.634 20:10:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:43.634 20:10:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:43.634 20:10:58 -- setup/hugepages.sh@73 -- # return 0
00:03:43.634 20:10:58 -- setup/hugepages.sh@198 -- # setup output
00:03:43.634 20:10:58 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.634 20:10:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:43.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:43.896 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:43.896 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:43.896 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:43.896 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
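Worth pausing on the arithmetic traced at hugepages.sh@49-57: the test requested 2097152 kB of hugetlb memory, and the meminfo dumps in this run report Hugepagesize: 2048 kB, so the target is 2097152 / 2048 = 1024 pages, all of which land on the single requested node. A minimal recomputation (the awk probe is illustrative, not part of the test scripts):

    # Requested pool in kB, as passed to get_test_nr_hugepages above.
    size=2097152
    # Default hugepage size reported by the kernel (2048 kB on this VM).
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    # 2097152 / 2048 = 1024, matching nr_hugepages=1024 at hugepages.sh@57.
    nr_hugepages=$((size / default_hugepages))
    # user_nodes=('0'), so the whole pool is assigned to node 0.
    nodes_test[0]=$nr_hugepages
    echo "nr_hugepages=$nr_hugepages"

The later dumps cross-check the result: HugePages_Total: 1024 and Hugetlb: 2097152 kB.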
00:03:43.896 20:10:58 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:43.896 20:10:58 -- setup/hugepages.sh@89 -- # local node
00:03:43.896 20:10:58 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:43.896 20:10:58 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:43.896 20:10:58 -- setup/hugepages.sh@92 -- # local surp
00:03:43.896 20:10:58 -- setup/hugepages.sh@93 -- # local resv
00:03:43.896 20:10:58 -- setup/hugepages.sh@94 -- # local anon
00:03:43.896 20:10:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:43.896 20:10:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:43.896 20:10:58 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:43.896 20:10:58 -- setup/common.sh@18 -- # local node=
00:03:43.896 20:10:58 -- setup/common.sh@19 -- # local var val
00:03:43.896 20:10:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:43.896 20:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.896 20:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.896 20:10:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.896 20:10:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.896 20:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.896 20:10:58 -- setup/common.sh@31 -- # IFS=': '
00:03:43.896 20:10:58 -- setup/common.sh@31 -- # read -r var val _
00:03:43.896 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7911512 kB' 'MemAvailable: 9461192 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467944 kB' 'Inactive: 1415896 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119836 kB' 'Mapped: 51024 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163596 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100140 kB' 'KernelStack: 6700 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: the scan checks every field from MemTotal down against AnonHugePages, one [[ ... ]] / continue pair per field ...]
00:03:44.161 20:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.161 20:10:58 -- setup/common.sh@33 -- # echo 0
00:03:44.161 20:10:58 -- setup/common.sh@33 -- # return 0
00:03:44.161 20:10:58 -- setup/hugepages.sh@97 -- # anon=0
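The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 above matches the kernel's transparent-hugepage policy string, in which the active mode is bracketed. The gate below is an inference from the trace rather than a copy of hugepages.sh; the sysfs path is the kernel's standard location, not something this log shows.

    # e.g. "always [madvise] never": THP is enabled in madvise mode here.
    thp_policy=$(</sys/kernel/mm/transparent_hugepage/enabled)

    if [[ $thp_policy != *"[never]"* ]]; then
        # THP can hand out anonymous hugepages behind the test's back,
        # so AnonHugePages has to be counted; this run read back 0 kB.
        anon=$(get_meminfo AnonHugePages)
    else
        anon=0
    fi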
00:03:44.161 20:10:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:44.161 20:10:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.161 20:10:58 -- setup/common.sh@18 -- # local node=
00:03:44.161 20:10:58 -- setup/common.sh@19 -- # local var val
00:03:44.161 20:10:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.161 20:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.161 20:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.161 20:10:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.161 20:10:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.161 20:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.161 20:10:58 -- setup/common.sh@31 -- # IFS=': '
00:03:44.161 20:10:58 -- setup/common.sh@31 -- # read -r var val _
00:03:44.162 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7912044 kB' 'MemAvailable: 9461724 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467684 kB' 'Inactive: 1415896 kB' 'Active(anon): 128476 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163552 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100096 kB' 'KernelStack: 6576 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: the scan skips every field with continue until HugePages_Surp matches ...]
00:03:44.163 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.163 20:10:58 -- setup/common.sh@33 -- # echo 0
00:03:44.163 20:10:58 -- setup/common.sh@33 -- # return 0
00:03:44.163 20:10:58 -- setup/hugepages.sh@99 -- # surp=0
00:03:44.163 20:10:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.163 20:10:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.163 20:10:58 -- setup/common.sh@18 -- # local node=
00:03:44.163 20:10:58 -- setup/common.sh@19 -- # local var val
00:03:44.163 20:10:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.163 20:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.163 20:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.163 20:10:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.163 20:10:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.163 20:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.163 20:10:58 -- setup/common.sh@31 -- # IFS=': '
00:03:44.163 20:10:58 -- setup/common.sh@31 -- # read -r var val _
00:03:44.163 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7912044 kB' 'MemAvailable: 9461724 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 467548 kB' 'Inactive: 1415896 kB' 'Active(anon): 128340 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 50788 kB' 'Shmem: 10492 kB' 'KReclaimable: 63456 kB' 'Slab: 163552 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 100096 kB' 'KernelStack: 6624 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 319592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: the scan skips every field with continue until HugePages_Rsvd matches ...]
00:03:44.164 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.164 20:10:58 -- setup/common.sh@33 -- # echo 0
00:03:44.164 20:10:58 -- setup/common.sh@33 -- # return 0
00:03:44.164 20:10:58 -- setup/hugepages.sh@100 -- # resv=0
00:03:44.164 nr_hugepages=1024
00:03:44.164 20:10:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:44.164 resv_hugepages=0
00:03:44.164 20:10:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:44.164 surplus_hugepages=0
00:03:44.164 20:10:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:44.164 anon_hugepages=0
00:03:44.164 20:10:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:44.164 20:10:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.164 20:10:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
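The two (( ... )) checks just traced are the pass condition of verify_nr_hugepages: the configured pool must equal the kernel's accounting, and with zero surplus and reserved pages it must equal the raw total that the HugePages_Total lookup below re-reads. Restated with this run's values (a sketch, not the script verbatim):

    nr_hugepages=1024   # target computed by get_test_nr_hugepages
    surp=0              # HugePages_Surp, queried above
    resv=0              # HugePages_Rsvd, queried above

    # hugepages.sh@107: the expected total is the target plus any surplus
    # and reserved pages the kernel is carrying.
    (( 1024 == nr_hugepages + surp + resv )) || echo 'FAIL: hugepage accounting mismatch'
    # hugepages.sh@109: with surp == resv == 0, the total must be the target itself.
    (( 1024 == nr_hugepages )) || echo 'FAIL: hugepage pool size drifted'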
00:03:44.164 20:10:58 -- setup/common.sh@32 -- # continue   [field scan skips every /proc/meminfo key from MemTotal through HugePages_Free in turn]
00:03:44.166 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:44.166 20:10:58 -- setup/common.sh@33 -- # echo 1024
00:03:44.166 20:10:58 -- setup/common.sh@33 -- # return 0
00:03:44.166 20:10:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.166 20:10:58 -- setup/hugepages.sh@112 -- # get_nodes
00:03:44.166 20:10:58 -- setup/hugepages.sh@27 -- # local node
00:03:44.166 20:10:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.166 20:10:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:44.166 20:10:58 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:44.166 20:10:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
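The long field scans in this stretch are all the same loop inside setup/common.sh's get_meminfo. A minimal sketch of that loop, reconstructed only from the @17-@33 trace entries above (the exact surrounding code in common.sh is an assumption):

    # Return the value of one /proc/meminfo field, optionally for one NUMA node.
    # Requires extglob for the +([0-9]) patterns seen in the trace.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2 var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo instead
        # (see the "get_meminfo HugePages_Surp 0" call below).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan field by field; this is what produces the long runs of
        # "[[ X == ... ]] / continue" in the trace above.
        local IFS=': '
        while read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

Under this sketch, hugepages.sh@100 corresponds to resv=$(get_meminfo HugePages_Rsvd), and the per-node call at @117 to get_meminfo HugePages_Surp 0.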
00:03:44.166 20:10:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.166 20:10:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.166 20:10:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:44.166 20:10:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.166 20:10:58 -- setup/common.sh@18 -- # local node=0
00:03:44.166 20:10:58 -- setup/common.sh@19 -- # local var val
00:03:44.166 20:10:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.166 20:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.166 20:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:44.166 20:10:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:44.166 20:10:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.166 20:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.166 20:10:58 -- setup/common.sh@31 -- # IFS=': '
00:03:44.166 20:10:58 -- setup/common.sh@31 -- # read -r var val _
00:03:44.166 20:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7911852 kB' 'MemUsed: 4325236 kB' (...) 'AnonHugePages: 0 kB' (...) 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
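This node0 lookup switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the @29 expansion removes it with an extglob pattern. A small standalone illustration of just that idiom (the inlined sample lines are assumptions mirroring the snapshot above):

    shopt -s extglob

    # Per-node meminfo lines look like "Node 0 MemTotal: 12237088 kB".
    mem=('Node 0 MemTotal: 12237088 kB' 'Node 0 HugePages_Surp: 0')

    # ${var#pattern} removes the shortest leading match; +([0-9]) is an
    # extglob for one-or-more digits, so "Node N " is cut for any node.
    mem=("${mem[@]#Node +([0-9]) }")

    printf '%s\n' "${mem[@]}"
    # -> MemTotal: 12237088 kB
    # -> HugePages_Surp: 0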
00:03:44.166 20:10:58 -- setup/common.sh@32 -- # continue   [field scan skips every node0 meminfo key from MemTotal through HugePages_Free in turn]
00:03:44.167 20:10:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.167 20:10:58 -- setup/common.sh@33 -- # echo 0
00:03:44.167 20:10:58 -- setup/common.sh@33 -- # return 0
00:03:44.167 20:10:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:44.167 20:10:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:44.167 20:10:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:44.167 20:10:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.167 20:10:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:44.167 node0=1024 expecting 1024
00:03:44.167 20:10:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:44.167 20:10:58 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:44.167 20:10:58 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:44.167 20:10:58 -- setup/hugepages.sh@202 -- # setup output
00:03:44.167 20:10:58 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:44.167 20:10:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:44.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:44.699 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.699 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.699 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.699 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.699 INFO: Requested 512 hugepages but 1024 already allocated on node0
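The INFO line is scripts/setup.sh honoring NRHUGE=512 while node0 already holds 1024 pages. The kernel interface behind it is the per-node nr_hugepages sysfs knob; a minimal sketch of requesting pages without shrinking an existing allocation (the paths are the standard kernel ones, but the clamping logic here is illustrative, not setup.sh's exact code):

    # Request NRHUGE 2 MiB hugepages on node 0, keeping any larger existing
    # allocation (CLEAR_HUGE=no semantics: never release what is already there).
    NRHUGE=${NRHUGE:-512}
    knob=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

    current=$(<"$knob")
    if (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
    else
        echo "$NRHUGE" >"$knob"   # needs root; the kernel may grant fewer if memory is fragmented
    fi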
00:03:44.699 20:10:59 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:44.699 20:10:59 -- setup/hugepages.sh@89 -- # local node
00:03:44.699 20:10:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.699 20:10:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.699 20:10:59 -- setup/hugepages.sh@92 -- # local surp
00:03:44.699 20:10:59 -- setup/hugepages.sh@93 -- # local resv
00:03:44.699 20:10:59 -- setup/hugepages.sh@94 -- # local anon
00:03:44.699 20:10:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.699 20:10:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.699 20:10:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.699 20:10:59 -- setup/common.sh@18 -- # local node=
00:03:44.699 20:10:59 -- setup/common.sh@19 -- # local var val
00:03:44.699 20:10:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.699 20:10:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.699 20:10:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.699 20:10:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.699 20:10:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.699 20:10:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.699 20:10:59 -- setup/common.sh@31 -- # IFS=': '
00:03:44.699 20:10:59 -- setup/common.sh@31 -- # read -r var val _
00:03:44.699 20:10:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7905860 kB' 'MemAvailable: 9455536 kB' (...) 'AnonHugePages: 0 kB' (...) 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB'
00:03:44.700 20:10:59 -- setup/common.sh@32 -- # continue   [field scan skips every /proc/meminfo key from MemTotal through HardwareCorrupted in turn]
00:03:44.701 20:10:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.701 20:10:59 -- setup/common.sh@33 -- # echo 0
00:03:44.701 20:10:59 -- setup/common.sh@33 -- # return 0
00:03:44.701 20:10:59 -- setup/hugepages.sh@97 -- # anon=0
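The @96 test above is how verify_nr_hugepages decides whether transparent hugepages could inflate AnonHugePages: it glob-matches the THP mode string against "[never]". A standalone sketch of the same check (get_meminfo as sketched earlier; the surrounding control flow is an assumption):

    # /sys/kernel/mm/transparent_hugepage/enabled reads e.g.
    # "always [madvise] never"; the bracketed word is the active mode.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)

    if [[ $thp != *"[never]"* ]]; then
        # THP enabled in some form, so AnonHugePages may legitimately be > 0.
        anon=$(get_meminfo AnonHugePages)
    else
        anon=0
    fi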
00:03:44.701 20:10:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:44.701 20:10:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.701 20:10:59 -- setup/common.sh@18 -- # local node=
00:03:44.701 20:10:59 -- setup/common.sh@19 -- # local var val
00:03:44.701 20:10:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.701 20:10:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.701 20:10:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.701 20:10:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.701 20:10:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.701 20:10:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.701 20:10:59 -- setup/common.sh@31 -- # IFS=': '
00:03:44.701 20:10:59 -- setup/common.sh@31 -- # read -r var val _
00:03:44.701 20:10:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7906104 kB' 'MemAvailable: 9455780 kB' (...) 'AnonHugePages: 0 kB' (...) 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB'
00:03:44.701 20:10:59 -- setup/common.sh@32 -- # continue   [field scan skips every /proc/meminfo key from MemTotal through HugePages_Free in turn]
00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.703 20:10:59 -- setup/common.sh@33 -- # echo 0
00:03:44.703 20:10:59 -- setup/common.sh@33 -- # return 0
00:03:44.703 20:10:59 -- setup/hugepages.sh@99 -- # surp=0
00:03:44.703 20:10:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.703 20:10:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.703 20:10:59 -- setup/common.sh@18 -- # local node=
00:03:44.703 20:10:59 -- setup/common.sh@19 -- # local var val
00:03:44.703 20:10:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.703 20:10:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.703 20:10:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.703 20:10:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.703 20:10:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.703 20:10:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': '
00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _
00:03:44.703 20:10:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7905600 kB' 'MemAvailable: 9455276 kB' (...) 'AnonHugePages: 0 kB' (...) 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB'
'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.703 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.703 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 
20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.704 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.704 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.705 20:10:59 -- setup/common.sh@33 -- # echo 0 00:03:44.705 20:10:59 -- setup/common.sh@33 -- # return 0 00:03:44.705 20:10:59 -- setup/hugepages.sh@100 -- # resv=0 00:03:44.705 nr_hugepages=1024 00:03:44.705 resv_hugepages=0 00:03:44.705 20:10:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.705 20:10:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.705 surplus_hugepages=0 00:03:44.705 anon_hugepages=0 00:03:44.705 20:10:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.705 20:10:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.705 20:10:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.705 20:10:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.705 20:10:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.705 20:10:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.705 20:10:59 -- setup/common.sh@18 -- # local node= 00:03:44.705 20:10:59 -- setup/common.sh@19 -- # local var val 00:03:44.705 20:10:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.705 20:10:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.705 20:10:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.705 20:10:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.705 20:10:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.705 20:10:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7905600 kB' 'MemAvailable: 9455276 kB' 'Buffers: 2684 kB' 'Cached: 1762912 kB' 'SwapCached: 0 kB' 'Active: 465288 kB' 'Inactive: 1415896 kB' 'Active(anon): 126080 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117164 kB' 'Mapped: 49952 kB' 'Shmem: 10492 kB' 'KReclaimable: 63448 kB' 'Slab: 163292 kB' 'SReclaimable: 63448 kB' 'SUnreclaim: 99844 kB' 'KernelStack: 6544 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 303848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 
00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.705 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.705 20:10:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.706 20:10:59 -- setup/common.sh@33 -- # echo 1024 00:03:44.706 20:10:59 -- setup/common.sh@33 -- # return 0 00:03:44.706 20:10:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.706 20:10:59 -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.706 20:10:59 -- setup/hugepages.sh@27 -- # local node 00:03:44.706 20:10:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.706 20:10:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.706 20:10:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.706 20:10:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.706 20:10:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.706 20:10:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.706 20:10:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.706 20:10:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.706 20:10:59 -- setup/common.sh@18 -- # local node=0 00:03:44.706 20:10:59 -- 
setup/common.sh@19 -- # local var val 00:03:44.706 20:10:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.706 20:10:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.706 20:10:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.706 20:10:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.706 20:10:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.706 20:10:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7906048 kB' 'MemUsed: 4331040 kB' 'SwapCached: 0 kB' 'Active: 465144 kB' 'Inactive: 1415896 kB' 'Active(anon): 125936 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1415896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1765596 kB' 'Mapped: 49952 kB' 'AnonPages: 117016 kB' 'Shmem: 10492 kB' 'KernelStack: 6512 kB' 'PageTables: 3588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63448 kB' 'Slab: 163292 kB' 'SReclaimable: 63448 kB' 'SUnreclaim: 99844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.706 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.706 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 
00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # continue 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.707 20:10:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.707 20:10:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.707 20:10:59 -- setup/common.sh@33 -- # echo 0 00:03:44.707 20:10:59 -- setup/common.sh@33 -- # return 0 00:03:44.707 20:10:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.707 20:10:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.707 20:10:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.707 20:10:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.707 node0=1024 expecting 1024 00:03:44.707 20:10:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.707 ************************************ 00:03:44.707 END TEST no_shrink_alloc 00:03:44.707 ************************************ 00:03:44.707 20:10:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.707 00:03:44.707 real 0m1.138s 00:03:44.707 user 0m0.495s 00:03:44.707 sys 0m0.681s 00:03:44.707 20:10:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.707 20:10:59 -- common/autotest_common.sh@10 -- # set +x 00:03:44.707 20:10:59 -- setup/hugepages.sh@217 -- # clear_hp 00:03:44.707 20:10:59 -- setup/hugepages.sh@37 -- # local node hp 00:03:44.707 20:10:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:44.707 20:10:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.707 20:10:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:44.707 20:10:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.707 20:10:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:44.707 20:10:59 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:44.707 20:10:59 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:44.707 ************************************ 00:03:44.707 END TEST hugepages 00:03:44.707 ************************************ 00:03:44.707 00:03:44.707 real 0m5.211s 00:03:44.707 user 0m2.123s 00:03:44.707 sys 0m2.910s 00:03:44.707 20:10:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.707 20:10:59 -- common/autotest_common.sh@10 -- # set +x 00:03:44.707 20:10:59 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:44.707 20:10:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.707 20:10:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.707 20:10:59 -- common/autotest_common.sh@10 -- # set +x 00:03:44.707 
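The long runs above are setup/common.sh's get_meminfo helper walking every field of /proc/meminfo (or a per-node copy under /sys/devices/system/node) until it reaches the requested key. The backslash-riddled strings such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption: when xtrace logs the right-hand side of [[ $var == HugePages_Surp ]], bash escapes every character so the word is unambiguously a literal pattern. What follows is a minimal sketch of the same lookup technique, reconstructed from the trace; it is an illustration, not the verbatim setup/common.sh.

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local -a mem
	local var val _

	# Per-node stats live in sysfs, where every line carries a
	# "Node <n> " prefix that must be stripped before matching.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")

	# IFS=': ' splits "HugePages_Surp:    0" into var=HugePages_Surp
	# and val=0, discarding any trailing unit such as "kB".
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

With this in place, get_meminfo HugePages_Surp 0 reads node 0's sysfs meminfo and prints the surplus count, which in the run above comes back as 0.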
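The checks traced at hugepages.sh@107 and @110 assert a simple identity: the kernel's HugePages_Total must equal the nr_hugepages the test configured plus any surplus and reserved pages, and with surp=0 and resv=0 here, 1024 == 1024 + 0 + 0 holds; get_nodes then repeats the bookkeeping per NUMA node (only node0 on this VM). A hedged sketch of that verification step follows; the helper name verify_hugepage_accounting is hypothetical, and get_meminfo is assumed to behave like the sketch shown earlier.

verify_hugepage_accounting() {
	local nr_hugepages=$1 surp resv total

	surp=$(get_meminfo HugePages_Surp)    # 0 in this run
	resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
	total=$(get_meminfo HugePages_Total)  # 1024 in this run

	# Every page in the kernel's pool must be either requested,
	# surplus, or reserved.
	(( total == nr_hugepages + surp + resv )) || return 1

	# The same key is then re-read per node, as the HugePages_Surp 0
	# lookup above shows.
	local node
	for node in /sys/devices/system/node/node[0-9]*; do
		(( $(get_meminfo HugePages_Surp "${node##*node}") == 0 )) || return 1
	done
}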
************************************ 00:03:44.707 START TEST driver 00:03:44.707 ************************************ 00:03:44.707 20:10:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:44.968 * Looking for test storage... 00:03:44.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:44.968 20:10:59 -- setup/driver.sh@68 -- # setup reset 00:03:44.968 20:10:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.968 20:10:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.559 20:11:05 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:51.559 20:11:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.559 20:11:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.559 20:11:05 -- common/autotest_common.sh@10 -- # set +x 00:03:51.559 ************************************ 00:03:51.559 START TEST guess_driver 00:03:51.559 ************************************ 00:03:51.559 20:11:05 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:51.559 20:11:05 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:51.559 20:11:05 -- setup/driver.sh@47 -- # local fail=0 00:03:51.559 20:11:05 -- setup/driver.sh@49 -- # pick_driver 00:03:51.559 20:11:05 -- setup/driver.sh@36 -- # vfio 00:03:51.559 20:11:05 -- setup/driver.sh@21 -- # local iommu_grups 00:03:51.559 20:11:05 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:51.559 20:11:05 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:51.559 20:11:05 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:51.559 20:11:05 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:51.559 20:11:05 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:51.559 20:11:05 -- setup/driver.sh@32 -- # return 1 00:03:51.559 20:11:05 -- setup/driver.sh@38 -- # uio 00:03:51.559 20:11:05 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:51.559 20:11:05 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:51.559 20:11:05 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:51.559 20:11:05 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:51.559 20:11:05 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:51.559 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:51.559 20:11:05 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:51.559 20:11:05 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:51.559 20:11:05 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:51.559 Looking for driver=uio_pci_generic 00:03:51.559 20:11:05 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:51.559 20:11:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.559 20:11:05 -- setup/driver.sh@45 -- # setup output config 00:03:51.559 20:11:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.559 20:11:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.819 20:11:06 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:51.819 20:11:06 -- setup/driver.sh@58 -- # continue 00:03:51.819 20:11:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.819 20:11:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.819 20:11:06 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:03:51.819 20:11:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.819 20:11:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.819 20:11:06 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:51.819 20:11:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.820 20:11:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.820 20:11:06 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:51.820 20:11:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.820 20:11:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.820 20:11:06 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:51.820 20:11:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.078 20:11:06 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:52.078 20:11:06 -- setup/driver.sh@65 -- # setup reset 00:03:52.078 20:11:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.078 20:11:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.633 00:03:58.633 real 0m7.006s 00:03:58.633 user 0m0.684s 00:03:58.633 sys 0m1.238s 00:03:58.633 20:11:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.633 20:11:12 -- common/autotest_common.sh@10 -- # set +x 00:03:58.633 ************************************ 00:03:58.633 END TEST guess_driver 00:03:58.633 ************************************ 00:03:58.633 00:03:58.633 real 0m13.028s 00:03:58.633 user 0m1.004s 00:03:58.633 sys 0m1.962s 00:03:58.633 20:11:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.633 20:11:12 -- common/autotest_common.sh@10 -- # set +x 00:03:58.633 ************************************ 00:03:58.633 END TEST driver 00:03:58.633 ************************************ 00:03:58.633 20:11:12 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:58.633 20:11:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:58.633 20:11:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:58.633 20:11:12 -- common/autotest_common.sh@10 -- # set +x 00:03:58.633 ************************************ 00:03:58.633 START TEST devices 00:03:58.633 ************************************ 00:03:58.633 20:11:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:58.633 * Looking for test storage... 
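The guess_driver trace above shows the selection order: vfio wins only if IOMMU groups exist under /sys/kernel/iommu_groups or the unsafe no-IOMMU mode is enabled; here the group count is 0 and the @29 checks fail, so the test falls back to uio and confirms via modprobe --show-depends that uio_pci_generic resolves to real kernel modules. Below is a loose sketch of that decision; names follow the trace, but this is not the verbatim driver.sh.

shopt -s nullglob   # the traced group count of 0 implies nullglob is set

vfio_usable() {
	local unsafe_vfio
	[[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
		&& unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

	local -a iommu_groups=(/sys/kernel/iommu_groups/*)
	# vfio-pci is only safe with a populated IOMMU, or the explicit
	# unsafe no-IOMMU opt-in.
	(( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]
}

pick_driver() {
	if vfio_usable; then
		echo vfio-pci
	elif modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
		# modprobe resolved uio.ko.xz and uio_pci_generic.ko.xz, as the
		# "insmod /lib/modules/..." lines in the trace above show.
		echo uio_pci_generic
	else
		echo 'No valid driver found'
	fi
}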
00:03:58.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.633 20:11:12 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:58.633 20:11:12 -- setup/devices.sh@192 -- # setup reset 00:03:58.633 20:11:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.634 20:11:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.202 20:11:13 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:59.202 20:11:13 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:59.202 20:11:13 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:59.202 20:11:13 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:59.202 20:11:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:59.202 20:11:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:59.202 20:11:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:59.202 20:11:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:59.202 20:11:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:03:59.202 20:11:13 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:03:59.202 20:11:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:59.202 20:11:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:03:59.202 20:11:13 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:03:59.202 20:11:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:59.202 20:11:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:59.202 20:11:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1647 -- # local 
device=nvme3n1 00:03:59.202 20:11:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:59.202 20:11:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:59.202 20:11:13 -- setup/devices.sh@196 -- # blocks=() 00:03:59.202 20:11:13 -- setup/devices.sh@196 -- # declare -a blocks 00:03:59.202 20:11:13 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:59.202 20:11:13 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:59.202 20:11:13 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:59.202 20:11:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.202 20:11:13 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:59.202 20:11:13 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:59.202 20:11:13 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:03:59.202 20:11:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:59.202 20:11:13 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:59.202 20:11:13 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:59.202 20:11:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:59.202 No valid GPT data, bailing 00:03:59.202 20:11:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.202 20:11:13 -- scripts/common.sh@393 -- # pt= 00:03:59.202 20:11:13 -- scripts/common.sh@394 -- # return 1 00:03:59.202 20:11:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:59.202 20:11:13 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:59.202 20:11:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:59.202 20:11:13 -- setup/common.sh@80 -- # echo 1073741824 00:03:59.202 20:11:13 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:03:59.202 20:11:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.202 20:11:13 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:59.202 20:11:13 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:59.202 20:11:13 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:59.202 20:11:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:59.202 20:11:13 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:59.202 20:11:13 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:59.202 20:11:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:59.202 No valid GPT data, bailing 00:03:59.202 20:11:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:59.202 20:11:13 -- scripts/common.sh@393 -- # pt= 00:03:59.202 20:11:13 -- scripts/common.sh@394 -- # return 1 00:03:59.202 20:11:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:59.202 20:11:13 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:59.202 20:11:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:59.202 20:11:13 -- setup/common.sh@80 -- # echo 4294967296 00:03:59.202 20:11:14 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:59.202 20:11:14 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.202 20:11:14 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:59.202 20:11:14 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.202 20:11:14 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:59.202 20:11:14 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:59.202 20:11:14 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:59.202 20:11:14 -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:59.202 20:11:14 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:59.202 20:11:14 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:59.202 20:11:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:59.202 No valid GPT data, bailing 00:03:59.202 20:11:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:59.202 20:11:14 -- scripts/common.sh@393 -- # pt= 00:03:59.202 20:11:14 -- scripts/common.sh@394 -- # return 1 00:03:59.202 20:11:14 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:59.202 20:11:14 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:59.202 20:11:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:59.202 20:11:14 -- setup/common.sh@80 -- # echo 4294967296 00:03:59.202 20:11:14 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:59.202 20:11:14 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.202 20:11:14 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:59.202 20:11:14 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.202 20:11:14 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:59.202 20:11:14 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:59.202 20:11:14 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:59.202 20:11:14 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:59.202 20:11:14 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:59.202 20:11:14 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:59.202 20:11:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:59.202 No valid GPT data, bailing 00:03:59.460 20:11:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:59.460 20:11:14 -- scripts/common.sh@393 -- # pt= 00:03:59.460 20:11:14 -- scripts/common.sh@394 -- # return 1 00:03:59.460 20:11:14 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:59.460 20:11:14 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:59.460 20:11:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:59.460 20:11:14 -- setup/common.sh@80 -- # echo 4294967296 00:03:59.460 20:11:14 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:59.460 20:11:14 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.460 20:11:14 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:59.460 20:11:14 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.460 20:11:14 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:03:59.460 20:11:14 -- setup/devices.sh@201 -- # ctrl=nvme2 00:03:59.460 20:11:14 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:59.460 20:11:14 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:59.460 20:11:14 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:03:59.460 20:11:14 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:03:59.460 20:11:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:03:59.460 No valid GPT data, bailing 00:03:59.460 20:11:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:59.460 20:11:14 -- scripts/common.sh@393 -- # pt= 00:03:59.460 20:11:14 -- scripts/common.sh@394 -- # return 1 00:03:59.460 20:11:14 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:03:59.460 20:11:14 -- setup/common.sh@76 -- # local dev=nvme2n1 00:03:59.460 20:11:14 -- setup/common.sh@78 
-- # [[ -e /sys/block/nvme2n1 ]] 00:03:59.460 20:11:14 -- setup/common.sh@80 -- # echo 6343335936 00:03:59.460 20:11:14 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:03:59.460 20:11:14 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.460 20:11:14 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:59.460 20:11:14 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.460 20:11:14 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:03:59.460 20:11:14 -- setup/devices.sh@201 -- # ctrl=nvme3 00:03:59.460 20:11:14 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:59.460 20:11:14 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:59.460 20:11:14 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:03:59.460 20:11:14 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:03:59.460 20:11:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:03:59.460 No valid GPT data, bailing 00:03:59.460 20:11:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:59.460 20:11:14 -- scripts/common.sh@393 -- # pt= 00:03:59.460 20:11:14 -- scripts/common.sh@394 -- # return 1 00:03:59.460 20:11:14 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:03:59.460 20:11:14 -- setup/common.sh@76 -- # local dev=nvme3n1 00:03:59.460 20:11:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:03:59.460 20:11:14 -- setup/common.sh@80 -- # echo 5368709120 00:03:59.460 20:11:14 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:59.460 20:11:14 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.460 20:11:14 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:59.460 20:11:14 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:03:59.460 20:11:14 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:03:59.460 20:11:14 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:59.460 20:11:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.460 20:11:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.460 20:11:14 -- common/autotest_common.sh@10 -- # set +x 00:03:59.460 ************************************ 00:03:59.460 START TEST nvme_mount 00:03:59.460 ************************************ 00:03:59.460 20:11:14 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:59.460 20:11:14 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:03:59.460 20:11:14 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:03:59.460 20:11:14 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.460 20:11:14 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:59.460 20:11:14 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:03:59.460 20:11:14 -- setup/common.sh@39 -- # local disk=nvme1n1 00:03:59.460 20:11:14 -- setup/common.sh@40 -- # local part_no=1 00:03:59.460 20:11:14 -- setup/common.sh@41 -- # local size=1073741824 00:03:59.460 20:11:14 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:59.460 20:11:14 -- setup/common.sh@44 -- # parts=() 00:03:59.460 20:11:14 -- setup/common.sh@44 -- # local parts 00:03:59.460 20:11:14 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:59.460 20:11:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.460 20:11:14 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.460 20:11:14 -- setup/common.sh@46 -- # (( 
part++ )) 00:03:59.460 20:11:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.460 20:11:14 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:59.460 20:11:14 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:03:59.460 20:11:14 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:00.832 Creating new GPT entries in memory. 00:04:00.832 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.832 other utilities. 00:04:00.832 20:11:15 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.832 20:11:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.832 20:11:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:00.832 20:11:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.832 20:11:15 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:01.764 Creating new GPT entries in memory. 00:04:01.764 The operation has completed successfully. 00:04:01.764 20:11:16 -- setup/common.sh@57 -- # (( part++ )) 00:04:01.764 20:11:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.764 20:11:16 -- setup/common.sh@62 -- # wait 53741 00:04:01.764 20:11:16 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.764 20:11:16 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:01.764 20:11:16 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.764 20:11:16 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:01.764 20:11:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:01.764 20:11:16 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.764 20:11:16 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.764 20:11:16 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:01.764 20:11:16 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:01.764 20:11:16 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.764 20:11:16 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.764 20:11:16 -- setup/devices.sh@53 -- # local found=0 00:04:01.764 20:11:16 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.764 20:11:16 -- setup/devices.sh@56 -- # : 00:04:01.764 20:11:16 -- setup/devices.sh@59 -- # local pci status 00:04:01.764 20:11:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.765 20:11:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:01.765 20:11:16 -- setup/devices.sh@47 -- # setup output config 00:04:01.765 20:11:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.765 20:11:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.765 20:11:16 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.765 20:11:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.765 20:11:16 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.765 20:11:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.022 
20:11:16 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.022 20:11:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:02.022 20:11:16 -- setup/devices.sh@63 -- # found=1 00:04:02.022 20:11:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.022 20:11:16 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.022 20:11:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.022 20:11:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.022 20:11:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.279 20:11:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.279 20:11:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.279 20:11:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.279 20:11:17 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:02.279 20:11:17 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.279 20:11:17 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.279 20:11:17 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:02.279 20:11:17 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:02.279 20:11:17 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.279 20:11:17 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.279 20:11:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:02.279 20:11:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:02.279 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:02.279 20:11:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:02.279 20:11:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:02.536 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:02.536 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:02.536 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:02.536 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:02.536 20:11:17 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:02.536 20:11:17 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:02.536 20:11:17 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.536 20:11:17 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:02.536 20:11:17 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:02.536 20:11:17 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.536 20:11:17 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:02.536 20:11:17 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:02.536 20:11:17 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:02.536 20:11:17 -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.536 20:11:17 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:02.536 20:11:17 -- setup/devices.sh@53 -- # local found=0 00:04:02.536 20:11:17 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.536 20:11:17 -- setup/devices.sh@56 -- # : 00:04:02.536 20:11:17 -- setup/devices.sh@59 -- # local pci status 00:04:02.536 20:11:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.536 20:11:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:02.536 20:11:17 -- setup/devices.sh@47 -- # setup output config 00:04:02.536 20:11:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.536 20:11:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.793 20:11:17 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.793 20:11:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.793 20:11:17 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:02.793 20:11:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.050 20:11:17 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.050 20:11:17 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:03.050 20:11:17 -- setup/devices.sh@63 -- # found=1 00:04:03.050 20:11:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.050 20:11:17 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.050 20:11:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.050 20:11:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.050 20:11:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.308 20:11:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.308 20:11:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.308 20:11:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.308 20:11:18 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:03.308 20:11:18 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:03.308 20:11:18 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:03.308 20:11:18 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:03.308 20:11:18 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:03.308 20:11:18 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:03.308 20:11:18 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:03.308 20:11:18 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:03.308 20:11:18 -- setup/devices.sh@50 -- # local mount_point= 00:04:03.308 20:11:18 -- setup/devices.sh@51 -- # local test_file= 00:04:03.308 20:11:18 -- setup/devices.sh@53 -- # local found=0 00:04:03.308 20:11:18 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.308 20:11:18 -- setup/devices.sh@59 -- # local pci status 00:04:03.308 20:11:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.308 20:11:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:03.308 20:11:18 -- setup/devices.sh@47 -- # 
setup output config 00:04:03.308 20:11:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.308 20:11:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:03.308 20:11:18 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.309 20:11:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.566 20:11:18 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.566 20:11:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.823 20:11:18 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.823 20:11:18 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:03.823 20:11:18 -- setup/devices.sh@63 -- # found=1 00:04:03.823 20:11:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.823 20:11:18 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.823 20:11:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.823 20:11:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.823 20:11:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.823 20:11:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:03.823 20:11:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.080 20:11:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.080 20:11:18 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:04.080 20:11:18 -- setup/devices.sh@68 -- # return 0 00:04:04.080 20:11:18 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:04.080 20:11:18 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.080 20:11:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:04.080 20:11:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:04.080 20:11:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:04.080 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:04.080 00:04:04.080 real 0m4.471s 00:04:04.080 user 0m0.907s 00:04:04.080 sys 0m1.230s 00:04:04.080 20:11:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.080 20:11:18 -- common/autotest_common.sh@10 -- # set +x 00:04:04.080 ************************************ 00:04:04.080 END TEST nvme_mount 00:04:04.080 ************************************ 00:04:04.080 20:11:18 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:04.080 20:11:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.080 20:11:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.080 20:11:18 -- common/autotest_common.sh@10 -- # set +x 00:04:04.081 ************************************ 00:04:04.081 START TEST dm_mount 00:04:04.081 ************************************ 00:04:04.081 20:11:18 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:04.081 20:11:18 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:04.081 20:11:18 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:04.081 20:11:18 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:04.081 20:11:18 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:04.081 20:11:18 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:04.081 20:11:18 -- setup/common.sh@40 -- # local part_no=2 00:04:04.081 20:11:18 -- setup/common.sh@41 -- # local size=1073741824 00:04:04.081 20:11:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:04.081 
20:11:18 -- setup/common.sh@44 -- # parts=() 00:04:04.081 20:11:18 -- setup/common.sh@44 -- # local parts 00:04:04.081 20:11:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:04.081 20:11:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.081 20:11:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.081 20:11:18 -- setup/common.sh@46 -- # (( part++ )) 00:04:04.081 20:11:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.081 20:11:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.081 20:11:18 -- setup/common.sh@46 -- # (( part++ )) 00:04:04.081 20:11:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.081 20:11:18 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:04.081 20:11:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:04.081 20:11:18 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:05.013 Creating new GPT entries in memory. 00:04:05.013 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:05.013 other utilities. 00:04:05.013 20:11:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:05.013 20:11:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.013 20:11:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.013 20:11:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.013 20:11:19 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:05.947 Creating new GPT entries in memory. 00:04:05.947 The operation has completed successfully. 00:04:05.947 20:11:20 -- setup/common.sh@57 -- # (( part++ )) 00:04:05.947 20:11:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.947 20:11:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.947 20:11:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.947 20:11:20 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:07.320 The operation has completed successfully. 
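The sgdisk bounds in the trace above come straight from the sector arithmetic in setup/common.sh: the 1073741824-byte size is divided down to 262144 sectors and the partitions are laid out back to back starting at sector 2048. A minimal sketch of that loop, using the variable names visible in the trace rather than the script's exact source:

# Reproduces the partition bounds seen above: 1:2048:264191 and 2:264192:526335.
disk=nvme1n1
part_no=2
size=1073741824
(( size /= 4096 ))                     # 262144 sectors per partition
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "sgdisk /dev/$disk --new=$part:$part_start:$part_end"
done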
00:04:07.320 20:11:21 -- setup/common.sh@57 -- # (( part++ )) 00:04:07.320 20:11:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.320 20:11:21 -- setup/common.sh@62 -- # wait 54359 00:04:07.320 20:11:21 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:07.320 20:11:21 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.320 20:11:21 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:07.320 20:11:21 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:07.320 20:11:21 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:07.320 20:11:21 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.320 20:11:21 -- setup/devices.sh@161 -- # break 00:04:07.320 20:11:21 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.320 20:11:21 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:07.320 20:11:21 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:07.320 20:11:21 -- setup/devices.sh@166 -- # dm=dm-0 00:04:07.320 20:11:21 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:07.320 20:11:21 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:07.320 20:11:21 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.320 20:11:21 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:07.320 20:11:21 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.320 20:11:21 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.320 20:11:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:07.320 20:11:21 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.320 20:11:21 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:07.320 20:11:21 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:07.320 20:11:21 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:07.320 20:11:21 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.320 20:11:21 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:07.320 20:11:21 -- setup/devices.sh@53 -- # local found=0 00:04:07.320 20:11:21 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.320 20:11:21 -- setup/devices.sh@56 -- # : 00:04:07.320 20:11:21 -- setup/devices.sh@59 -- # local pci status 00:04:07.320 20:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.320 20:11:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:07.320 20:11:21 -- setup/devices.sh@47 -- # setup output config 00:04:07.320 20:11:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.320 20:11:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.320 20:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.320 20:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.321 20:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.321 20:11:22 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.579 20:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.579 20:11:22 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:07.579 20:11:22 -- setup/devices.sh@63 -- # found=1 00:04:07.579 20:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.579 20:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.579 20:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.579 20:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.579 20:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.836 20:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.836 20:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.836 20:11:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.836 20:11:22 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:07.836 20:11:22 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.836 20:11:22 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.836 20:11:22 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:07.836 20:11:22 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.836 20:11:22 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:07.836 20:11:22 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:07.836 20:11:22 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:07.836 20:11:22 -- setup/devices.sh@50 -- # local mount_point= 00:04:07.836 20:11:22 -- setup/devices.sh@51 -- # local test_file= 00:04:07.836 20:11:22 -- setup/devices.sh@53 -- # local found=0 00:04:07.836 20:11:22 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.836 20:11:22 -- setup/devices.sh@59 -- # local pci status 00:04:07.836 20:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.836 20:11:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:07.836 20:11:22 -- setup/devices.sh@47 -- # setup output config 00:04:07.836 20:11:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.836 20:11:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.836 20:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:07.836 20:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.094 20:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:08.094 20:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.094 20:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:08.094 20:11:23 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:08.094 20:11:23 -- setup/devices.sh@63 -- # found=1 00:04:08.094 20:11:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.094 20:11:23 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:08.094 20:11:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.352 20:11:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:08.352 20:11:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.352 20:11:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:08.352 20:11:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.352 20:11:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.352 20:11:23 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:08.352 20:11:23 -- setup/devices.sh@68 -- # return 0 00:04:08.352 20:11:23 -- setup/devices.sh@187 -- # cleanup_dm 00:04:08.352 20:11:23 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:08.352 20:11:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.352 20:11:23 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:08.610 20:11:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:08.610 20:11:23 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:08.610 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.610 20:11:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:08.610 20:11:23 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:08.610 00:04:08.610 real 0m4.464s 00:04:08.610 user 0m0.591s 00:04:08.610 sys 0m0.813s 00:04:08.610 20:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.610 20:11:23 -- common/autotest_common.sh@10 -- # set +x 00:04:08.610 ************************************ 00:04:08.610 END TEST dm_mount 00:04:08.610 ************************************ 00:04:08.610 20:11:23 -- setup/devices.sh@1 -- # cleanup 00:04:08.610 20:11:23 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:08.610 20:11:23 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.610 20:11:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:08.610 20:11:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:08.610 20:11:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:08.610 20:11:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:08.868 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.868 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.868 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:08.868 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:08.868 20:11:23 -- setup/devices.sh@12 -- # cleanup_dm 00:04:08.868 20:11:23 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:08.868 20:11:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.868 20:11:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:08.868 20:11:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:08.868 20:11:23 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:08.868 20:11:23 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:08.868 00:04:08.868 real 0m10.918s 00:04:08.868 user 0m2.276s 00:04:08.868 sys 0m2.727s 00:04:08.868 20:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.868 ************************************ 00:04:08.868 END TEST devices 00:04:08.868 ************************************ 00:04:08.868 20:11:23 -- common/autotest_common.sh@10 -- # 
set +x 00:04:08.868 ************************************ 00:04:08.868 END TEST setup.sh 00:04:08.868 ************************************ 00:04:08.868 00:04:08.868 real 0m40.316s 00:04:08.868 user 0m7.639s 00:04:08.868 sys 0m10.856s 00:04:08.868 20:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.868 20:11:23 -- common/autotest_common.sh@10 -- # set +x 00:04:08.868 20:11:23 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:08.868 Hugepages 00:04:08.868 node hugesize free / total 00:04:08.868 node0 1048576kB 0 / 0 00:04:09.127 node0 2048kB 2048 / 2048 00:04:09.127 00:04:09.127 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.127 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:09.127 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:09.127 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:09.127 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:09.386 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:09.386 20:11:24 -- spdk/autotest.sh@141 -- # uname -s 00:04:09.386 20:11:24 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:09.386 20:11:24 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:09.386 20:11:24 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.213 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.213 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.213 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.213 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.213 20:11:25 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:11.588 20:11:26 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:11.588 20:11:26 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:11.588 20:11:26 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:11.588 20:11:26 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:11.588 20:11:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:11.588 20:11:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:11.588 20:11:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.588 20:11:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:11.588 20:11:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:11.588 20:11:26 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:11.588 20:11:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:11.588 20:11:26 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.847 Waiting for block devices as requested 00:04:11.847 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.847 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:12.107 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:12.107 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.391 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:17.391 20:11:31 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:17.391 20:11:31 -- 
common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:17.391 20:11:32 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.391 20:11:32 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:17.391 20:11:32 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:17.391 20:11:32 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:17.391 20:11:32 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:17.391 20:11:32 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:17.391 20:11:32 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme2 00:04:17.391 20:11:32 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme2 ]] 00:04:17.391 20:11:32 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:17.391 20:11:32 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme2 00:04:17.391 20:11:32 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:17.391 20:11:32 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:17.391 20:11:32 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:17.391 20:11:32 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:17.391 20:11:32 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme2 00:04:17.391 20:11:32 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:17.391 20:11:32 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:17.391 20:11:32 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:17.391 20:11:32 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:17.391 20:11:32 -- common/autotest_common.sh@1542 -- # continue 00:04:17.391 20:11:32 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:17.391 20:11:32 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:17.391 20:11:32 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.391 20:11:32 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:04:17.391 20:11:32 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:17.391 20:11:32 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:17.391 20:11:32 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:17.392 20:11:32 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:17.392 20:11:32 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme3 00:04:17.392 20:11:32 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme3 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme3 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:17.392 20:11:32 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:17.392 20:11:32 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme3 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # cut 
-d: -f2 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:17.392 20:11:32 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1542 -- # continue 00:04:17.392 20:11:32 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:17.392 20:11:32 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:17.392 20:11:32 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.392 20:11:32 -- common/autotest_common.sh@1487 -- # grep 0000:00:08.0/nvme/nvme 00:04:17.392 20:11:32 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:17.392 20:11:32 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:17.392 20:11:32 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:17.392 20:11:32 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:04:17.392 20:11:32 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:17.392 20:11:32 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:17.392 20:11:32 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:17.392 20:11:32 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1542 -- # continue 00:04:17.392 20:11:32 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:17.392 20:11:32 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:17.392 20:11:32 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.392 20:11:32 -- common/autotest_common.sh@1487 -- # grep 0000:00:09.0/nvme/nvme 00:04:17.392 20:11:32 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:17.392 20:11:32 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:17.392 20:11:32 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:17.392 20:11:32 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:17.392 20:11:32 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:17.392 20:11:32 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:17.392 20:11:32 -- 
common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:17.392 20:11:32 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:17.392 20:11:32 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:17.392 20:11:32 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:17.392 20:11:32 -- common/autotest_common.sh@1542 -- # continue 00:04:17.392 20:11:32 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:17.392 20:11:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:17.392 20:11:32 -- common/autotest_common.sh@10 -- # set +x 00:04:17.392 20:11:32 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:17.392 20:11:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:17.392 20:11:32 -- common/autotest_common.sh@10 -- # set +x 00:04:17.392 20:11:32 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.334 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.334 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.334 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.594 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.594 20:11:33 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:18.594 20:11:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:18.594 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.594 20:11:33 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:18.594 20:11:33 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:18.594 20:11:33 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:18.594 20:11:33 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:18.594 20:11:33 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:18.594 20:11:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:18.594 20:11:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:18.594 20:11:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:18.594 20:11:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.594 20:11:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.594 20:11:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:18.594 20:11:33 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:18.594 20:11:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:18.594 20:11:33 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:18.594 20:11:33 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:18.594 20:11:33 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:18.594 20:11:33 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.594 20:11:33 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:18.594 20:11:33 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:18.594 20:11:33 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:18.594 20:11:33 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
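Two gates decide the skips in this stretch of the trace: the OACS word parsed from nvme id-ctrl (' 0x12a' has bit 3 set, so 0x12a & 0x8 = 8 and namespace management is reported as supported, letting the unvmcap check proceed), and the PCI device ID read for each controller (0x0010, QEMU's emulated NVMe, never matches the 0x0a54 filter, so opal_revert_cleanup ends up with an empty bdf list). A small sketch of the OACS bit test, assuming the same nvme-cli output format shown in the trace:

# Extract OACS from `nvme id-ctrl` and test bit 3 (namespace management).
oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # yields ' 0x12a' here
if (( (oacs & 0x8) != 0 )); then
  echo "namespace management supported"                     # 0x12a & 0x8 = 8
fi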
00:04:18.594 20:11:33 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:18.594 20:11:33 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:18.594 20:11:33 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:18.594 20:11:33 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.594 20:11:33 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:18.594 20:11:33 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:18.594 20:11:33 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:18.594 20:11:33 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.594 20:11:33 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:18.594 20:11:33 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:18.594 20:11:33 -- common/autotest_common.sh@1578 -- # return 0 00:04:18.594 20:11:33 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:18.594 20:11:33 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:18.595 20:11:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:18.595 20:11:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:18.595 20:11:33 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:18.595 20:11:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:18.595 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.595 20:11:33 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.595 20:11:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.595 20:11:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.595 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.856 ************************************ 00:04:18.856 START TEST env 00:04:18.856 ************************************ 00:04:18.856 20:11:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.856 * Looking for test storage... 
00:04:18.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:18.856 20:11:33 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.856 20:11:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.856 20:11:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.856 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.856 ************************************ 00:04:18.856 START TEST env_memory 00:04:18.856 ************************************ 00:04:18.856 20:11:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.856 00:04:18.856 00:04:18.856 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.856 http://cunit.sourceforge.net/ 00:04:18.856 00:04:18.856 00:04:18.856 Suite: memory 00:04:18.856 Test: alloc and free memory map ...[2024-10-16 20:11:33.672627] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:18.856 passed 00:04:18.856 Test: mem map translation ...[2024-10-16 20:11:33.711513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:18.856 [2024-10-16 20:11:33.711556] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:18.856 [2024-10-16 20:11:33.711616] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:18.856 [2024-10-16 20:11:33.711632] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:18.856 passed 00:04:18.856 Test: mem map registration ...[2024-10-16 20:11:33.780064] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:18.856 [2024-10-16 20:11:33.780108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:19.118 passed 00:04:19.118 Test: mem map adjacent registrations ...passed 00:04:19.118 00:04:19.118 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.118 suites 1 1 n/a 0 0 00:04:19.118 tests 4 4 4 0 0 00:04:19.118 asserts 152 152 152 0 n/a 00:04:19.118 00:04:19.118 Elapsed time = 0.234 seconds 00:04:19.118 00:04:19.118 real 0m0.269s 00:04:19.118 user 0m0.246s 00:04:19.118 sys 0m0.019s 00:04:19.118 20:11:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.118 ************************************ 00:04:19.118 END TEST env_memory 00:04:19.118 ************************************ 00:04:19.118 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.118 20:11:33 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.118 20:11:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.118 20:11:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.118 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.118 ************************************ 00:04:19.118 START TEST env_vtophys 00:04:19.118 ************************************ 00:04:19.118 20:11:33 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.118 EAL: lib.eal log level changed from notice to debug 00:04:19.118 EAL: Detected lcore 0 as core 0 on socket 0 00:04:19.118 EAL: Detected lcore 1 as core 0 on socket 0 00:04:19.118 EAL: Detected lcore 2 as core 0 on socket 0 00:04:19.118 EAL: Detected lcore 3 as core 0 on socket 0 00:04:19.118 EAL: Detected lcore 4 as core 0 on socket 0 00:04:19.118 EAL: Detected lcore 5 as core 0 on socket 0 00:04:19.118 EAL: Detected lcore 6 as core 0 on socket 0 00:04:19.118 EAL: Detected lcore 7 as core 0 on socket 0 00:04:19.118 EAL: Detected lcore 8 as core 0 on socket 0 00:04:19.118 EAL: Detected lcore 9 as core 0 on socket 0 00:04:19.118 EAL: Maximum logical cores by configuration: 128 00:04:19.118 EAL: Detected CPU lcores: 10 00:04:19.118 EAL: Detected NUMA nodes: 1 00:04:19.118 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:19.118 EAL: Detected shared linkage of DPDK 00:04:19.118 EAL: No shared files mode enabled, IPC will be disabled 00:04:19.118 EAL: Selected IOVA mode 'PA' 00:04:19.118 EAL: Probing VFIO support... 00:04:19.118 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.118 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:19.118 EAL: Ask a virtual area of 0x2e000 bytes 00:04:19.118 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:19.118 EAL: Setting up physically contiguous memory... 00:04:19.118 EAL: Setting maximum number of open files to 524288 00:04:19.118 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:19.118 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:19.118 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.118 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:19.118 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.118 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.118 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:19.118 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:19.118 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.118 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:19.118 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.118 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.118 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:19.118 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:19.118 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.118 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:19.118 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.118 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.118 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:19.118 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:19.118 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.118 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:19.118 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.118 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.118 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:19.118 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:19.118 EAL: Hugepages will be freed exactly as allocated. 
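The four 0x400000000-byte VA reservations above follow directly from the memseg parameters EAL prints: 4 segment lists of n_segs:8192 at the 2 MiB hugepage size, i.e. 16 GiB of virtual address space per list. A quick check of the arithmetic:

# 8192 segments/list * 2 MiB/segment = 16 GiB per memseg list
printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # -> 0x400000000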
00:04:19.118 EAL: No shared files mode enabled, IPC is disabled 00:04:19.118 EAL: No shared files mode enabled, IPC is disabled 00:04:19.422 EAL: TSC frequency is ~2600000 KHz 00:04:19.422 EAL: Main lcore 0 is ready (tid=7f34096d2a40;cpuset=[0]) 00:04:19.422 EAL: Trying to obtain current memory policy. 00:04:19.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.422 EAL: Restoring previous memory policy: 0 00:04:19.422 EAL: request: mp_malloc_sync 00:04:19.422 EAL: No shared files mode enabled, IPC is disabled 00:04:19.422 EAL: Heap on socket 0 was expanded by 2MB 00:04:19.422 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.422 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:19.422 EAL: Mem event callback 'spdk:(nil)' registered 00:04:19.422 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:19.422 00:04:19.422 00:04:19.422 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.422 http://cunit.sourceforge.net/ 00:04:19.422 00:04:19.422 00:04:19.422 Suite: components_suite 00:04:19.688 Test: vtophys_malloc_test ...passed 00:04:19.688 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:19.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.688 EAL: Restoring previous memory policy: 4 00:04:19.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.688 EAL: request: mp_malloc_sync 00:04:19.688 EAL: No shared files mode enabled, IPC is disabled 00:04:19.688 EAL: Heap on socket 0 was expanded by 4MB 00:04:19.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.688 EAL: request: mp_malloc_sync 00:04:19.688 EAL: No shared files mode enabled, IPC is disabled 00:04:19.688 EAL: Heap on socket 0 was shrunk by 4MB 00:04:19.688 EAL: Trying to obtain current memory policy. 00:04:19.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.688 EAL: Restoring previous memory policy: 4 00:04:19.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.688 EAL: request: mp_malloc_sync 00:04:19.688 EAL: No shared files mode enabled, IPC is disabled 00:04:19.688 EAL: Heap on socket 0 was expanded by 6MB 00:04:19.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.688 EAL: request: mp_malloc_sync 00:04:19.688 EAL: No shared files mode enabled, IPC is disabled 00:04:19.688 EAL: Heap on socket 0 was shrunk by 6MB 00:04:19.688 EAL: Trying to obtain current memory policy. 00:04:19.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.688 EAL: Restoring previous memory policy: 4 00:04:19.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.688 EAL: request: mp_malloc_sync 00:04:19.688 EAL: No shared files mode enabled, IPC is disabled 00:04:19.688 EAL: Heap on socket 0 was expanded by 10MB 00:04:19.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.688 EAL: request: mp_malloc_sync 00:04:19.688 EAL: No shared files mode enabled, IPC is disabled 00:04:19.688 EAL: Heap on socket 0 was shrunk by 10MB 00:04:19.688 EAL: Trying to obtain current memory policy. 
00:04:19.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.688 EAL: Restoring previous memory policy: 4 00:04:19.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.688 EAL: request: mp_malloc_sync 00:04:19.688 EAL: No shared files mode enabled, IPC is disabled 00:04:19.688 EAL: Heap on socket 0 was expanded by 18MB 00:04:19.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.688 EAL: request: mp_malloc_sync 00:04:19.688 EAL: No shared files mode enabled, IPC is disabled 00:04:19.688 EAL: Heap on socket 0 was shrunk by 18MB 00:04:19.688 EAL: Trying to obtain current memory policy. 00:04:19.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.688 EAL: Restoring previous memory policy: 4 00:04:19.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.688 EAL: request: mp_malloc_sync 00:04:19.688 EAL: No shared files mode enabled, IPC is disabled 00:04:19.688 EAL: Heap on socket 0 was expanded by 34MB 00:04:19.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.949 EAL: request: mp_malloc_sync 00:04:19.949 EAL: No shared files mode enabled, IPC is disabled 00:04:19.949 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.949 EAL: Trying to obtain current memory policy. 00:04:19.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.949 EAL: Restoring previous memory policy: 4 00:04:19.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.949 EAL: request: mp_malloc_sync 00:04:19.949 EAL: No shared files mode enabled, IPC is disabled 00:04:19.949 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.949 EAL: request: mp_malloc_sync 00:04:19.949 EAL: No shared files mode enabled, IPC is disabled 00:04:19.949 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.949 EAL: Trying to obtain current memory policy. 00:04:19.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.949 EAL: Restoring previous memory policy: 4 00:04:19.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.949 EAL: request: mp_malloc_sync 00:04:19.949 EAL: No shared files mode enabled, IPC is disabled 00:04:19.949 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.210 EAL: request: mp_malloc_sync 00:04:20.210 EAL: No shared files mode enabled, IPC is disabled 00:04:20.210 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.471 EAL: Trying to obtain current memory policy. 00:04:20.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.471 EAL: Restoring previous memory policy: 4 00:04:20.471 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.471 EAL: request: mp_malloc_sync 00:04:20.471 EAL: No shared files mode enabled, IPC is disabled 00:04:20.471 EAL: Heap on socket 0 was expanded by 258MB 00:04:20.730 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.730 EAL: request: mp_malloc_sync 00:04:20.730 EAL: No shared files mode enabled, IPC is disabled 00:04:20.730 EAL: Heap on socket 0 was shrunk by 258MB 00:04:20.988 EAL: Trying to obtain current memory policy. 
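[editor's note] The repeating expand/shrink pairs above come from DPDK's dynamic memory mode: each allocation that does not fit in the heap maps more hugepages and fires a mem event, and each free that empties a segment unmaps it and fires another; the 'spdk:(nil)' callback is SPDK keeping its address maps in sync. A hypothetical standalone sketch of that mechanism, not SPDK's actual registration code, assuming the public DPDK headers:

    #include <rte_eal.h>
    #include <rte_malloc.h>
    #include <rte_memory.h>

    static void
    mem_event_cb(enum rte_mem_event event_type, const void *addr,
                 size_t len, void *arg)
    {
        /* RTE_MEM_EVENT_ALLOC when the heap grows ("Heap on socket 0 was
         * expanded by ..."), RTE_MEM_EVENT_FREE when it shrinks. */
    }

    int
    main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0) {
            return -1;
        }

        rte_mem_event_callback_register("example", mem_event_cb, NULL);

        /* A 66 MB request that exceeds the current heap triggers ALLOC
         * events, matching the 66MB iteration in the log above. */
        void *buf = rte_malloc(NULL, 66 << 20, 0);

        /* Freeing the last allocation in a segment triggers FREE events
         * ("... was shrunk by ..."). */
        rte_free(buf);
        return 0;
    }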
00:04:20.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.246 EAL: Restoring previous memory policy: 4 00:04:21.246 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.246 EAL: request: mp_malloc_sync 00:04:21.246 EAL: No shared files mode enabled, IPC is disabled 00:04:21.246 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.813 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.813 EAL: request: mp_malloc_sync 00:04:21.813 EAL: No shared files mode enabled, IPC is disabled 00:04:21.813 EAL: Heap on socket 0 was shrunk by 514MB 00:04:22.381 EAL: Trying to obtain current memory policy. 00:04:22.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.381 EAL: Restoring previous memory policy: 4 00:04:22.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.381 EAL: request: mp_malloc_sync 00:04:22.381 EAL: No shared files mode enabled, IPC is disabled 00:04:22.381 EAL: Heap on socket 0 was expanded by 1026MB 00:04:23.757 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.757 EAL: request: mp_malloc_sync 00:04:23.757 EAL: No shared files mode enabled, IPC is disabled 00:04:23.757 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:24.697 passed 00:04:24.697 00:04:24.697 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.697 suites 1 1 n/a 0 0 00:04:24.697 tests 2 2 2 0 0 00:04:24.697 asserts 5306 5306 5306 0 n/a 00:04:24.697 00:04:24.697 Elapsed time = 5.116 seconds 00:04:24.697 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.697 EAL: request: mp_malloc_sync 00:04:24.697 EAL: No shared files mode enabled, IPC is disabled 00:04:24.697 EAL: Heap on socket 0 was shrunk by 2MB 00:04:24.697 EAL: No shared files mode enabled, IPC is disabled 00:04:24.697 EAL: No shared files mode enabled, IPC is disabled 00:04:24.697 EAL: No shared files mode enabled, IPC is disabled 00:04:24.697 00:04:24.697 real 0m5.373s 00:04:24.697 user 0m4.456s 00:04:24.697 sys 0m0.764s 00:04:24.697 20:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.698 20:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.698 ************************************ 00:04:24.698 END TEST env_vtophys 00:04:24.698 ************************************ 00:04:24.698 20:11:39 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.698 20:11:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.698 20:11:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.698 20:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.698 ************************************ 00:04:24.698 START TEST env_pci 00:04:24.698 ************************************ 00:04:24.698 20:11:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.698 00:04:24.698 00:04:24.698 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.698 http://cunit.sourceforge.net/ 00:04:24.698 00:04:24.698 00:04:24.698 Suite: pci 00:04:24.698 Test: pci_hook ...[2024-10-16 20:11:39.406185] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56058 has claimed it 00:04:24.698 EAL: Cannot find device (10000:00:01.0) 00:04:24.698 EAL: Failed to attach device on primary process 00:04:24.698 passed 00:04:24.698 00:04:24.698 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.698 suites 1 1 n/a 0 0 00:04:24.698 tests 1 1 1 0 0 00:04:24.698 asserts 25 25 25 0 n/a 00:04:24.698 00:04:24.698 Elapsed 
time = 0.006 seconds 00:04:24.698 ************************************ 00:04:24.698 END TEST env_pci 00:04:24.698 ************************************ 00:04:24.698 00:04:24.698 real 0m0.065s 00:04:24.698 user 0m0.031s 00:04:24.698 sys 0m0.032s 00:04:24.698 20:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.698 20:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.698 20:11:39 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:24.698 20:11:39 -- env/env.sh@15 -- # uname 00:04:24.698 20:11:39 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:24.698 20:11:39 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:24.698 20:11:39 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.698 20:11:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:24.698 20:11:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.698 20:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.698 ************************************ 00:04:24.698 START TEST env_dpdk_post_init 00:04:24.698 ************************************ 00:04:24.698 20:11:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.698 EAL: Detected CPU lcores: 10 00:04:24.698 EAL: Detected NUMA nodes: 1 00:04:24.698 EAL: Detected shared linkage of DPDK 00:04:24.698 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.698 EAL: Selected IOVA mode 'PA' 00:04:24.958 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.958 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:24.958 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:24.958 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:24.958 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:24.958 Starting DPDK initialization... 00:04:24.958 Starting SPDK post initialization... 00:04:24.958 SPDK NVMe probe 00:04:24.958 Attaching to 0000:00:06.0 00:04:24.958 Attaching to 0000:00:07.0 00:04:24.958 Attaching to 0000:00:08.0 00:04:24.958 Attaching to 0000:00:09.0 00:04:24.958 Attached to 0000:00:06.0 00:04:24.958 Attached to 0000:00:07.0 00:04:24.958 Attached to 0000:00:09.0 00:04:24.958 Attached to 0000:00:08.0 00:04:24.958 Cleaning up... 
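[editor's note] The "Probe PCI driver: spdk_nvme" / "Attaching to" / "Attached to" lines above are the standard spdk_nvme_probe() enumeration flow against the four emulated QEMU controllers (1b36:0010). A minimal sketch of that flow, assuming spdk/nvme.h from a recent release; controller teardown is omitted.

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true; /* attach to every controller found */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "probe_example";
        if (spdk_env_init(&opts) < 0) {
            return -1;
        }

        /* A NULL transport ID means "enumerate the local PCIe bus". */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }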
00:04:24.958 ************************************ 00:04:24.958 END TEST env_dpdk_post_init 00:04:24.958 ************************************ 00:04:24.958 00:04:24.958 real 0m0.227s 00:04:24.958 user 0m0.057s 00:04:24.958 sys 0m0.071s 00:04:24.958 20:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.958 20:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.958 20:11:39 -- env/env.sh@26 -- # uname 00:04:24.958 20:11:39 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.958 20:11:39 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.958 20:11:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.958 20:11:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.958 20:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.958 ************************************ 00:04:24.958 START TEST env_mem_callbacks 00:04:24.958 ************************************ 00:04:24.958 20:11:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.958 EAL: Detected CPU lcores: 10 00:04:24.958 EAL: Detected NUMA nodes: 1 00:04:24.958 EAL: Detected shared linkage of DPDK 00:04:24.958 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.958 EAL: Selected IOVA mode 'PA' 00:04:25.301 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.301 00:04:25.301 00:04:25.301 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.301 http://cunit.sourceforge.net/ 00:04:25.301 00:04:25.301 00:04:25.301 Suite: memory 00:04:25.301 Test: test ... 00:04:25.301 register 0x200000200000 2097152 00:04:25.301 malloc 3145728 00:04:25.301 register 0x200000400000 4194304 00:04:25.301 buf 0x2000004fffc0 len 3145728 PASSED 00:04:25.301 malloc 64 00:04:25.301 buf 0x2000004ffec0 len 64 PASSED 00:04:25.301 malloc 4194304 00:04:25.301 register 0x200000800000 6291456 00:04:25.301 buf 0x2000009fffc0 len 4194304 PASSED 00:04:25.301 free 0x2000004fffc0 3145728 00:04:25.301 free 0x2000004ffec0 64 00:04:25.301 unregister 0x200000400000 4194304 PASSED 00:04:25.301 free 0x2000009fffc0 4194304 00:04:25.301 unregister 0x200000800000 6291456 PASSED 00:04:25.301 malloc 8388608 00:04:25.301 register 0x200000400000 10485760 00:04:25.301 buf 0x2000005fffc0 len 8388608 PASSED 00:04:25.301 free 0x2000005fffc0 8388608 00:04:25.301 unregister 0x200000400000 10485760 PASSED 00:04:25.301 passed 00:04:25.301 00:04:25.301 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.301 suites 1 1 n/a 0 0 00:04:25.301 tests 1 1 1 0 0 00:04:25.301 asserts 15 15 15 0 n/a 00:04:25.301 00:04:25.301 Elapsed time = 0.047 seconds 00:04:25.301 00:04:25.301 real 0m0.218s 00:04:25.301 user 0m0.066s 00:04:25.301 sys 0m0.049s 00:04:25.301 ************************************ 00:04:25.301 END TEST env_mem_callbacks 00:04:25.301 ************************************ 00:04:25.301 20:11:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.301 20:11:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.301 ************************************ 00:04:25.301 END TEST env 00:04:25.301 ************************************ 00:04:25.301 00:04:25.301 real 0m6.513s 00:04:25.301 user 0m4.957s 00:04:25.301 sys 0m1.107s 00:04:25.301 20:11:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.301 20:11:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.301 20:11:40 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
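[editor's note] The register/unregister traces printed by mem_callbacks above follow the pattern any application uses to make externally allocated memory DMA-visible to SPDK. A minimal sketch of that pattern, assuming spdk/env.h; note that in real code the region must be 2 MB aligned and hugepage-backed (e.g. MAP_HUGETLB), which this simplified mmap call does not guarantee.

    #include "spdk/env.h"
    #include <sys/mman.h>

    static void
    register_example(void)
    {
        size_t len = 2 * 1024 * 1024;
        void *vaddr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* Makes the region visible to every spdk_mem_map — these are the
         * "register 0x... <len>" lines above; notify callbacks fire here. */
        spdk_mem_register(vaddr, len);

        /* ... DMA-capable use of vaddr ... */

        spdk_mem_unregister(vaddr, len);
        munmap(vaddr, len);
    }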
00:04:25.301 20:11:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.301 20:11:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.301 20:11:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.301 ************************************ 00:04:25.301 START TEST rpc 00:04:25.301 ************************************ 00:04:25.301 20:11:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.301 * Looking for test storage... 00:04:25.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.301 20:11:40 -- rpc/rpc.sh@65 -- # spdk_pid=56176 00:04:25.301 20:11:40 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.301 20:11:40 -- rpc/rpc.sh@67 -- # waitforlisten 56176 00:04:25.301 20:11:40 -- common/autotest_common.sh@819 -- # '[' -z 56176 ']' 00:04:25.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.301 20:11:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.301 20:11:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:25.301 20:11:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.301 20:11:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:25.301 20:11:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.301 20:11:40 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:25.562 [2024-10-16 20:11:40.262015] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:25.562 [2024-10-16 20:11:40.262172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56176 ] 00:04:25.562 [2024-10-16 20:11:40.415961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.821 [2024-10-16 20:11:40.666818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:25.821 [2024-10-16 20:11:40.667068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:25.822 [2024-10-16 20:11:40.667087] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56176' to capture a snapshot of events at runtime. 00:04:25.822 [2024-10-16 20:11:40.667098] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56176 for offline analysis/debug. 
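[editor's note] The app_setup_trace notices above come from launching spdk_tgt with '-e bdev', which enables the bdev tracepoint group and creates the /dev/shm/spdk_tgt_trace.pid56176 snapshot file mentioned in the log. A rough sketch of the equivalent programmatic setup, assuming the spdk_app_opts fields in a recent spdk/event.h (field names unverified against this exact SPDK revision):

    #include "spdk/event.h"

    static void
    start_fn(void *ctx)
    {
        /* Target is up; RPCs are now served on opts.rpc_addr. A real app
         * eventually calls spdk_app_stop() — spdk_app_start() blocks
         * until then. */
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts;
        int rc;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "spdk_tgt";
        opts.rpc_addr = "/var/tmp/spdk.sock";
        opts.tpoint_group_mask = "bdev"; /* same effect as '-e bdev' */

        rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
    }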
00:04:25.822 [2024-10-16 20:11:40.667133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.204 20:11:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:27.204 20:11:41 -- common/autotest_common.sh@852 -- # return 0 00:04:27.204 20:11:41 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.204 20:11:41 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.204 20:11:41 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:27.204 20:11:41 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:27.204 20:11:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.204 20:11:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.204 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.204 ************************************ 00:04:27.204 START TEST rpc_integrity 00:04:27.204 ************************************ 00:04:27.204 20:11:41 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:27.204 20:11:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.204 20:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.204 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.204 20:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.204 20:11:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.204 20:11:41 -- rpc/rpc.sh@13 -- # jq length 00:04:27.204 20:11:41 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.204 20:11:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.204 20:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.204 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.204 20:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.204 20:11:41 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:27.204 20:11:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.204 20:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.204 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.204 20:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.204 20:11:41 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.204 { 00:04:27.204 "name": "Malloc0", 00:04:27.204 "aliases": [ 00:04:27.204 "6348d606-3749-46ab-8b4d-c2b6babef5a0" 00:04:27.204 ], 00:04:27.204 "product_name": "Malloc disk", 00:04:27.204 "block_size": 512, 00:04:27.204 "num_blocks": 16384, 00:04:27.204 "uuid": "6348d606-3749-46ab-8b4d-c2b6babef5a0", 00:04:27.204 "assigned_rate_limits": { 00:04:27.204 "rw_ios_per_sec": 0, 00:04:27.204 "rw_mbytes_per_sec": 0, 00:04:27.204 "r_mbytes_per_sec": 0, 00:04:27.204 "w_mbytes_per_sec": 0 00:04:27.204 }, 00:04:27.204 "claimed": false, 00:04:27.204 "zoned": false, 00:04:27.204 "supported_io_types": { 00:04:27.204 "read": true, 00:04:27.204 "write": true, 00:04:27.204 "unmap": true, 00:04:27.204 "write_zeroes": true, 00:04:27.204 "flush": true, 00:04:27.204 "reset": true, 00:04:27.204 "compare": false, 00:04:27.204 "compare_and_write": false, 00:04:27.204 "abort": true, 00:04:27.204 "nvme_admin": false, 00:04:27.204 "nvme_io": false 00:04:27.204 }, 00:04:27.204 "memory_domains": [ 00:04:27.204 { 00:04:27.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.204 
"dma_device_type": 2 00:04:27.204 } 00:04:27.204 ], 00:04:27.204 "driver_specific": {} 00:04:27.204 } 00:04:27.205 ]' 00:04:27.205 20:11:41 -- rpc/rpc.sh@17 -- # jq length 00:04:27.205 20:11:41 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.205 20:11:41 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:27.205 20:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.205 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.205 [2024-10-16 20:11:41.917011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:27.205 [2024-10-16 20:11:41.917117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.205 [2024-10-16 20:11:41.917145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:27.205 [2024-10-16 20:11:41.917158] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.205 [2024-10-16 20:11:41.919621] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.205 [2024-10-16 20:11:41.919675] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.205 Passthru0 00:04:27.205 20:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.205 20:11:41 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.205 20:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.205 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.205 20:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.205 20:11:41 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.205 { 00:04:27.205 "name": "Malloc0", 00:04:27.205 "aliases": [ 00:04:27.205 "6348d606-3749-46ab-8b4d-c2b6babef5a0" 00:04:27.205 ], 00:04:27.205 "product_name": "Malloc disk", 00:04:27.205 "block_size": 512, 00:04:27.205 "num_blocks": 16384, 00:04:27.205 "uuid": "6348d606-3749-46ab-8b4d-c2b6babef5a0", 00:04:27.205 "assigned_rate_limits": { 00:04:27.205 "rw_ios_per_sec": 0, 00:04:27.205 "rw_mbytes_per_sec": 0, 00:04:27.205 "r_mbytes_per_sec": 0, 00:04:27.205 "w_mbytes_per_sec": 0 00:04:27.205 }, 00:04:27.205 "claimed": true, 00:04:27.205 "claim_type": "exclusive_write", 00:04:27.205 "zoned": false, 00:04:27.205 "supported_io_types": { 00:04:27.205 "read": true, 00:04:27.205 "write": true, 00:04:27.205 "unmap": true, 00:04:27.205 "write_zeroes": true, 00:04:27.205 "flush": true, 00:04:27.205 "reset": true, 00:04:27.205 "compare": false, 00:04:27.205 "compare_and_write": false, 00:04:27.205 "abort": true, 00:04:27.205 "nvme_admin": false, 00:04:27.205 "nvme_io": false 00:04:27.205 }, 00:04:27.205 "memory_domains": [ 00:04:27.205 { 00:04:27.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.205 "dma_device_type": 2 00:04:27.205 } 00:04:27.205 ], 00:04:27.205 "driver_specific": {} 00:04:27.205 }, 00:04:27.205 { 00:04:27.205 "name": "Passthru0", 00:04:27.205 "aliases": [ 00:04:27.205 "4867be99-a325-5006-b9e0-1257fe44b749" 00:04:27.205 ], 00:04:27.205 "product_name": "passthru", 00:04:27.205 "block_size": 512, 00:04:27.205 "num_blocks": 16384, 00:04:27.205 "uuid": "4867be99-a325-5006-b9e0-1257fe44b749", 00:04:27.205 "assigned_rate_limits": { 00:04:27.205 "rw_ios_per_sec": 0, 00:04:27.205 "rw_mbytes_per_sec": 0, 00:04:27.205 "r_mbytes_per_sec": 0, 00:04:27.205 "w_mbytes_per_sec": 0 00:04:27.205 }, 00:04:27.205 "claimed": false, 00:04:27.205 "zoned": false, 00:04:27.205 "supported_io_types": { 00:04:27.205 "read": true, 00:04:27.205 "write": true, 00:04:27.205 "unmap": true, 00:04:27.205 
"write_zeroes": true, 00:04:27.205 "flush": true, 00:04:27.205 "reset": true, 00:04:27.205 "compare": false, 00:04:27.205 "compare_and_write": false, 00:04:27.205 "abort": true, 00:04:27.205 "nvme_admin": false, 00:04:27.205 "nvme_io": false 00:04:27.205 }, 00:04:27.205 "memory_domains": [ 00:04:27.205 { 00:04:27.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.205 "dma_device_type": 2 00:04:27.205 } 00:04:27.205 ], 00:04:27.205 "driver_specific": { 00:04:27.205 "passthru": { 00:04:27.205 "name": "Passthru0", 00:04:27.205 "base_bdev_name": "Malloc0" 00:04:27.205 } 00:04:27.205 } 00:04:27.205 } 00:04:27.205 ]' 00:04:27.205 20:11:41 -- rpc/rpc.sh@21 -- # jq length 00:04:27.205 20:11:41 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.205 20:11:41 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.205 20:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.205 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.205 20:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.205 20:11:41 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:27.205 20:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.205 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.205 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.205 20:11:42 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.205 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.205 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.205 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.205 20:11:42 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.205 20:11:42 -- rpc/rpc.sh@26 -- # jq length 00:04:27.205 20:11:42 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.205 00:04:27.205 real 0m0.265s 00:04:27.205 user 0m0.131s 00:04:27.205 sys 0m0.039s 00:04:27.205 20:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.205 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.205 ************************************ 00:04:27.205 END TEST rpc_integrity 00:04:27.205 ************************************ 00:04:27.205 20:11:42 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:27.205 20:11:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.205 20:11:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.205 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.205 ************************************ 00:04:27.205 START TEST rpc_plugins 00:04:27.205 ************************************ 00:04:27.205 20:11:42 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:27.205 20:11:42 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.205 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.205 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.464 20:11:42 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.464 20:11:42 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.464 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.464 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.464 20:11:42 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.464 { 00:04:27.464 "name": "Malloc1", 00:04:27.464 "aliases": [ 00:04:27.464 "d9f5fd13-dc94-46c0-913d-9c5dd04b84e2" 00:04:27.464 ], 00:04:27.464 "product_name": "Malloc disk", 00:04:27.464 
"block_size": 4096, 00:04:27.464 "num_blocks": 256, 00:04:27.464 "uuid": "d9f5fd13-dc94-46c0-913d-9c5dd04b84e2", 00:04:27.464 "assigned_rate_limits": { 00:04:27.464 "rw_ios_per_sec": 0, 00:04:27.464 "rw_mbytes_per_sec": 0, 00:04:27.464 "r_mbytes_per_sec": 0, 00:04:27.464 "w_mbytes_per_sec": 0 00:04:27.464 }, 00:04:27.464 "claimed": false, 00:04:27.464 "zoned": false, 00:04:27.464 "supported_io_types": { 00:04:27.464 "read": true, 00:04:27.464 "write": true, 00:04:27.464 "unmap": true, 00:04:27.464 "write_zeroes": true, 00:04:27.464 "flush": true, 00:04:27.464 "reset": true, 00:04:27.464 "compare": false, 00:04:27.464 "compare_and_write": false, 00:04:27.464 "abort": true, 00:04:27.464 "nvme_admin": false, 00:04:27.464 "nvme_io": false 00:04:27.464 }, 00:04:27.464 "memory_domains": [ 00:04:27.464 { 00:04:27.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.464 "dma_device_type": 2 00:04:27.464 } 00:04:27.464 ], 00:04:27.464 "driver_specific": {} 00:04:27.464 } 00:04:27.464 ]' 00:04:27.464 20:11:42 -- rpc/rpc.sh@32 -- # jq length 00:04:27.464 20:11:42 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.464 20:11:42 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.464 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.464 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.464 20:11:42 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.464 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.464 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.464 20:11:42 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.464 20:11:42 -- rpc/rpc.sh@36 -- # jq length 00:04:27.464 20:11:42 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.464 00:04:27.464 real 0m0.116s 00:04:27.464 user 0m0.067s 00:04:27.464 sys 0m0.017s 00:04:27.464 20:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.464 ************************************ 00:04:27.464 END TEST rpc_plugins 00:04:27.464 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 ************************************ 00:04:27.464 20:11:42 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.464 20:11:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.464 20:11:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.464 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 ************************************ 00:04:27.464 START TEST rpc_trace_cmd_test 00:04:27.464 ************************************ 00:04:27.464 20:11:42 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:27.464 20:11:42 -- rpc/rpc.sh@40 -- # local info 00:04:27.464 20:11:42 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.464 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.464 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.464 20:11:42 -- rpc/rpc.sh@42 -- # info='{ 00:04:27.464 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56176", 00:04:27.464 "tpoint_group_mask": "0x8", 00:04:27.464 "iscsi_conn": { 00:04:27.464 "mask": "0x2", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "scsi": { 00:04:27.464 "mask": "0x4", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "bdev": { 00:04:27.464 "mask": "0x8", 00:04:27.464 "tpoint_mask": 
"0xffffffffffffffff" 00:04:27.464 }, 00:04:27.464 "nvmf_rdma": { 00:04:27.464 "mask": "0x10", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "nvmf_tcp": { 00:04:27.464 "mask": "0x20", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "ftl": { 00:04:27.464 "mask": "0x40", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "blobfs": { 00:04:27.464 "mask": "0x80", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "dsa": { 00:04:27.464 "mask": "0x200", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "thread": { 00:04:27.464 "mask": "0x400", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "nvme_pcie": { 00:04:27.464 "mask": "0x800", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "iaa": { 00:04:27.464 "mask": "0x1000", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "nvme_tcp": { 00:04:27.464 "mask": "0x2000", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 }, 00:04:27.464 "bdev_nvme": { 00:04:27.464 "mask": "0x4000", 00:04:27.464 "tpoint_mask": "0x0" 00:04:27.464 } 00:04:27.464 }' 00:04:27.464 20:11:42 -- rpc/rpc.sh@43 -- # jq length 00:04:27.464 20:11:42 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:27.464 20:11:42 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.464 20:11:42 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.464 20:11:42 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.726 20:11:42 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.726 20:11:42 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.726 20:11:42 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.726 20:11:42 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:27.726 ************************************ 00:04:27.726 END TEST rpc_trace_cmd_test 00:04:27.726 ************************************ 00:04:27.726 20:11:42 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:27.726 00:04:27.726 real 0m0.179s 00:04:27.726 user 0m0.144s 00:04:27.726 sys 0m0.024s 00:04:27.726 20:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.726 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.726 20:11:42 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:27.726 20:11:42 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:27.726 20:11:42 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:27.726 20:11:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.726 20:11:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.726 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.726 ************************************ 00:04:27.726 START TEST rpc_daemon_integrity 00:04:27.726 ************************************ 00:04:27.726 20:11:42 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:27.726 20:11:42 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.726 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.726 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.726 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.726 20:11:42 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.726 20:11:42 -- rpc/rpc.sh@13 -- # jq length 00:04:27.726 20:11:42 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.726 20:11:42 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.726 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.726 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.726 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.726 20:11:42 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:27.726 20:11:42 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.726 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.726 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.726 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.726 20:11:42 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.726 { 00:04:27.726 "name": "Malloc2", 00:04:27.726 "aliases": [ 00:04:27.726 "e039c768-0b26-4f71-98ee-d01ca9feea86" 00:04:27.726 ], 00:04:27.726 "product_name": "Malloc disk", 00:04:27.726 "block_size": 512, 00:04:27.726 "num_blocks": 16384, 00:04:27.726 "uuid": "e039c768-0b26-4f71-98ee-d01ca9feea86", 00:04:27.726 "assigned_rate_limits": { 00:04:27.726 "rw_ios_per_sec": 0, 00:04:27.726 "rw_mbytes_per_sec": 0, 00:04:27.726 "r_mbytes_per_sec": 0, 00:04:27.726 "w_mbytes_per_sec": 0 00:04:27.726 }, 00:04:27.726 "claimed": false, 00:04:27.726 "zoned": false, 00:04:27.726 "supported_io_types": { 00:04:27.726 "read": true, 00:04:27.726 "write": true, 00:04:27.726 "unmap": true, 00:04:27.726 "write_zeroes": true, 00:04:27.726 "flush": true, 00:04:27.726 "reset": true, 00:04:27.726 "compare": false, 00:04:27.726 "compare_and_write": false, 00:04:27.726 "abort": true, 00:04:27.726 "nvme_admin": false, 00:04:27.726 "nvme_io": false 00:04:27.726 }, 00:04:27.726 "memory_domains": [ 00:04:27.726 { 00:04:27.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.726 "dma_device_type": 2 00:04:27.726 } 00:04:27.726 ], 00:04:27.726 "driver_specific": {} 00:04:27.726 } 00:04:27.726 ]' 00:04:27.726 20:11:42 -- rpc/rpc.sh@17 -- # jq length 00:04:27.726 20:11:42 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.726 20:11:42 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:27.726 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.726 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.726 [2024-10-16 20:11:42.600771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:27.726 [2024-10-16 20:11:42.600898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.726 [2024-10-16 20:11:42.600932] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:27.726 [2024-10-16 20:11:42.601010] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.726 [2024-10-16 20:11:42.602712] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.726 [2024-10-16 20:11:42.602767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.726 Passthru0 00:04:27.726 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.726 20:11:42 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.726 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.726 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.726 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.726 20:11:42 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.726 { 00:04:27.726 "name": "Malloc2", 00:04:27.726 "aliases": [ 00:04:27.726 "e039c768-0b26-4f71-98ee-d01ca9feea86" 00:04:27.726 ], 00:04:27.726 "product_name": "Malloc disk", 00:04:27.726 "block_size": 512, 00:04:27.726 "num_blocks": 16384, 00:04:27.726 "uuid": "e039c768-0b26-4f71-98ee-d01ca9feea86", 00:04:27.726 "assigned_rate_limits": { 00:04:27.726 "rw_ios_per_sec": 0, 00:04:27.726 "rw_mbytes_per_sec": 0, 00:04:27.726 "r_mbytes_per_sec": 0, 00:04:27.726 
"w_mbytes_per_sec": 0 00:04:27.726 }, 00:04:27.726 "claimed": true, 00:04:27.726 "claim_type": "exclusive_write", 00:04:27.726 "zoned": false, 00:04:27.726 "supported_io_types": { 00:04:27.726 "read": true, 00:04:27.726 "write": true, 00:04:27.726 "unmap": true, 00:04:27.726 "write_zeroes": true, 00:04:27.726 "flush": true, 00:04:27.726 "reset": true, 00:04:27.726 "compare": false, 00:04:27.726 "compare_and_write": false, 00:04:27.726 "abort": true, 00:04:27.726 "nvme_admin": false, 00:04:27.726 "nvme_io": false 00:04:27.726 }, 00:04:27.726 "memory_domains": [ 00:04:27.726 { 00:04:27.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.726 "dma_device_type": 2 00:04:27.726 } 00:04:27.726 ], 00:04:27.726 "driver_specific": {} 00:04:27.726 }, 00:04:27.726 { 00:04:27.726 "name": "Passthru0", 00:04:27.726 "aliases": [ 00:04:27.726 "cd9c7697-0276-597f-856a-9cf3a693e7db" 00:04:27.726 ], 00:04:27.726 "product_name": "passthru", 00:04:27.726 "block_size": 512, 00:04:27.726 "num_blocks": 16384, 00:04:27.726 "uuid": "cd9c7697-0276-597f-856a-9cf3a693e7db", 00:04:27.726 "assigned_rate_limits": { 00:04:27.726 "rw_ios_per_sec": 0, 00:04:27.726 "rw_mbytes_per_sec": 0, 00:04:27.726 "r_mbytes_per_sec": 0, 00:04:27.726 "w_mbytes_per_sec": 0 00:04:27.726 }, 00:04:27.726 "claimed": false, 00:04:27.726 "zoned": false, 00:04:27.726 "supported_io_types": { 00:04:27.726 "read": true, 00:04:27.726 "write": true, 00:04:27.726 "unmap": true, 00:04:27.726 "write_zeroes": true, 00:04:27.726 "flush": true, 00:04:27.726 "reset": true, 00:04:27.726 "compare": false, 00:04:27.726 "compare_and_write": false, 00:04:27.726 "abort": true, 00:04:27.726 "nvme_admin": false, 00:04:27.726 "nvme_io": false 00:04:27.726 }, 00:04:27.726 "memory_domains": [ 00:04:27.726 { 00:04:27.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.726 "dma_device_type": 2 00:04:27.726 } 00:04:27.726 ], 00:04:27.726 "driver_specific": { 00:04:27.726 "passthru": { 00:04:27.726 "name": "Passthru0", 00:04:27.726 "base_bdev_name": "Malloc2" 00:04:27.726 } 00:04:27.726 } 00:04:27.726 } 00:04:27.726 ]' 00:04:27.726 20:11:42 -- rpc/rpc.sh@21 -- # jq length 00:04:27.726 20:11:42 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.726 20:11:42 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.726 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.726 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.988 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.988 20:11:42 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:27.988 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.988 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.988 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.988 20:11:42 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.988 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:27.988 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.988 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:27.988 20:11:42 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.988 20:11:42 -- rpc/rpc.sh@26 -- # jq length 00:04:27.988 ************************************ 00:04:27.988 END TEST rpc_daemon_integrity 00:04:27.988 ************************************ 00:04:27.988 20:11:42 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.988 00:04:27.988 real 0m0.226s 00:04:27.988 user 0m0.121s 00:04:27.988 sys 0m0.030s 00:04:27.988 20:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.988 
20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.988 20:11:42 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:27.988 20:11:42 -- rpc/rpc.sh@84 -- # killprocess 56176 00:04:27.988 20:11:42 -- common/autotest_common.sh@926 -- # '[' -z 56176 ']' 00:04:27.988 20:11:42 -- common/autotest_common.sh@930 -- # kill -0 56176 00:04:27.988 20:11:42 -- common/autotest_common.sh@931 -- # uname 00:04:27.988 20:11:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:27.988 20:11:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56176 00:04:27.988 killing process with pid 56176 00:04:27.988 20:11:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:27.988 20:11:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:27.988 20:11:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56176' 00:04:27.988 20:11:42 -- common/autotest_common.sh@945 -- # kill 56176 00:04:27.988 20:11:42 -- common/autotest_common.sh@950 -- # wait 56176 00:04:29.373 ************************************ 00:04:29.373 END TEST rpc 00:04:29.373 00:04:29.373 real 0m3.807s 00:04:29.373 user 0m4.353s 00:04:29.373 sys 0m0.653s 00:04:29.373 20:11:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.373 20:11:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.373 ************************************ 00:04:29.373 20:11:43 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.373 20:11:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.373 20:11:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.373 20:11:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.373 ************************************ 00:04:29.373 START TEST rpc_client 00:04:29.373 ************************************ 00:04:29.373 20:11:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.373 * Looking for test storage... 
00:04:29.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:29.374 20:11:44 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:29.374 OK 00:04:29.374 20:11:44 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:29.374 00:04:29.374 real 0m0.107s 00:04:29.374 user 0m0.045s 00:04:29.374 sys 0m0.067s 00:04:29.374 ************************************ 00:04:29.374 END TEST rpc_client 00:04:29.374 ************************************ 00:04:29.374 20:11:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.374 20:11:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.374 20:11:44 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:29.374 20:11:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.374 20:11:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.374 20:11:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.374 ************************************ 00:04:29.374 START TEST json_config 00:04:29.374 ************************************ 00:04:29.374 20:11:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:29.374 20:11:44 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:29.374 20:11:44 -- nvmf/common.sh@7 -- # uname -s 00:04:29.374 20:11:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.374 20:11:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.374 20:11:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.374 20:11:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.374 20:11:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.374 20:11:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.374 20:11:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.374 20:11:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.374 20:11:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.374 20:11:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.374 20:11:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b92e1fe-a063-41e2-8301-3ad34ae218a8 00:04:29.374 20:11:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=7b92e1fe-a063-41e2-8301-3ad34ae218a8 00:04:29.374 20:11:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.374 20:11:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.374 20:11:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.374 20:11:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:29.374 20:11:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.374 20:11:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.374 20:11:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.374 20:11:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.374 20:11:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.374 20:11:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.374 20:11:44 -- paths/export.sh@5 -- # export PATH 00:04:29.374 20:11:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.374 20:11:44 -- nvmf/common.sh@46 -- # : 0 00:04:29.374 20:11:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:29.374 20:11:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:29.374 20:11:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:29.374 20:11:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.374 20:11:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.374 20:11:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:29.374 20:11:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:29.374 20:11:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:29.374 20:11:44 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:29.374 20:11:44 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:29.374 20:11:44 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:29.374 20:11:44 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:29.374 20:11:44 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:29.374 WARNING: No tests are enabled so not running JSON configuration tests 00:04:29.374 20:11:44 -- json_config/json_config.sh@27 -- # exit 0 00:04:29.374 00:04:29.374 real 0m0.056s 00:04:29.374 user 0m0.023s 00:04:29.374 sys 0m0.032s 00:04:29.374 20:11:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.374 20:11:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.374 ************************************ 00:04:29.374 END TEST json_config 00:04:29.374 ************************************ 00:04:29.374 20:11:44 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:29.374 20:11:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.374 20:11:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.374 20:11:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.374 ************************************ 00:04:29.374 START TEST json_config_extra_key 00:04:29.374 
************************************ 00:04:29.374 20:11:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:29.374 20:11:44 -- nvmf/common.sh@7 -- # uname -s 00:04:29.374 20:11:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.374 20:11:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.374 20:11:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.374 20:11:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.374 20:11:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.374 20:11:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.374 20:11:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.374 20:11:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.374 20:11:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.374 20:11:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.374 20:11:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b92e1fe-a063-41e2-8301-3ad34ae218a8 00:04:29.374 20:11:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=7b92e1fe-a063-41e2-8301-3ad34ae218a8 00:04:29.374 20:11:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.374 20:11:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.374 20:11:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.374 20:11:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:29.374 20:11:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.374 20:11:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.374 20:11:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.374 20:11:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.374 20:11:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.374 20:11:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.374 20:11:44 -- paths/export.sh@5 -- # export PATH 00:04:29.374 20:11:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.374 20:11:44 -- nvmf/common.sh@46 -- # : 0 00:04:29.374 20:11:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:29.374 20:11:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:29.374 20:11:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:29.374 20:11:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.374 20:11:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.374 20:11:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:29.374 20:11:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:29.374 20:11:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:29.374 INFO: launching applications... 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:29.374 20:11:44 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:29.375 20:11:44 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56459 00:04:29.375 20:11:44 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:29.375 Waiting for target to run... 00:04:29.375 20:11:44 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56459 /var/tmp/spdk_tgt.sock 00:04:29.375 20:11:44 -- common/autotest_common.sh@819 -- # '[' -z 56459 ']' 00:04:29.375 20:11:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:29.375 20:11:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:29.375 20:11:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
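[editor's note] The waitforlisten step above simply polls until the target's UNIX-domain RPC socket accepts connections. A minimal sketch of one such poll iteration in C, assuming the JSON-RPC client API in spdk/jsonrpc.h (the same API exercised by rpc_client_test earlier in this log):

    #include "spdk/jsonrpc.h"
    #include <sys/socket.h>

    static int
    target_is_listening(void)
    {
        struct spdk_jsonrpc_client *client =
            spdk_jsonrpc_client_connect("/var/tmp/spdk_tgt.sock", AF_UNIX);

        if (client == NULL) {
            return 0; /* not listening yet; caller sleeps and retries */
        }
        spdk_jsonrpc_client_close(client);
        return 1;
    }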
00:04:29.375 20:11:44 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:29.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:29.375 20:11:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:29.375 20:11:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.636 [2024-10-16 20:11:44.377075] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:29.636 [2024-10-16 20:11:44.377889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56459 ] 00:04:29.896 [2024-10-16 20:11:44.808680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.154 [2024-10-16 20:11:44.958428] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:30.154 [2024-10-16 20:11:44.958732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.110 20:11:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:31.110 00:04:31.110 INFO: shutting down applications... 00:04:31.110 20:11:45 -- common/autotest_common.sh@852 -- # return 0 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56459 ]] 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56459 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56459 00:04:31.110 20:11:45 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:31.674 20:11:46 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:31.674 20:11:46 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:31.674 20:11:46 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56459 00:04:31.674 20:11:46 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:32.240 20:11:46 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:32.240 20:11:46 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:32.240 20:11:46 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56459 00:04:32.240 20:11:46 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:32.506 20:11:47 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:32.506 20:11:47 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:32.506 SPDK target shutdown done 00:04:32.506 20:11:47 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56459 00:04:32.506 20:11:47 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:32.506 20:11:47 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:32.506 20:11:47 -- 
json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:32.506 20:11:47 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:32.506 Success 00:04:32.506 20:11:47 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:32.506 ************************************ 00:04:32.506 END TEST json_config_extra_key 00:04:32.506 ************************************ 00:04:32.506 00:04:32.506 real 0m3.163s 00:04:32.506 user 0m2.948s 00:04:32.506 sys 0m0.496s 00:04:32.506 20:11:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.506 20:11:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 20:11:47 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:32.506 20:11:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.506 20:11:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.506 20:11:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 ************************************ 00:04:32.506 START TEST alias_rpc 00:04:32.506 ************************************ 00:04:32.506 20:11:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:32.763 * Looking for test storage... 00:04:32.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:32.763 20:11:47 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:32.763 20:11:47 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56544 00:04:32.763 20:11:47 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56544 00:04:32.763 20:11:47 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.763 20:11:47 -- common/autotest_common.sh@819 -- # '[' -z 56544 ']' 00:04:32.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.763 20:11:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.763 20:11:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:32.763 20:11:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.763 20:11:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:32.763 20:11:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.763 [2024-10-16 20:11:47.573404] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
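Shutdown in json_config_extra_key is a bounded poll, all visible in the trace above: kill -SIGINT asks the target to exit, then kill -0 is retried at half-second intervals for at most 30 iterations before 'SPDK target shutdown done' is printed. The same loop as a standalone sketch (the function wrapper is an assumption):

    # Graceful-shutdown poll reconstructed from the json_config_extra_key trace.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"                  # request a clean exit
        for ((i = 0; i < 30; i++)); do       # 30 x 0.5s = 15s budget, as traced
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1                             # target ignored SIGINT
    }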
00:04:32.763 [2024-10-16 20:11:47.573693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56544 ] 00:04:33.020 [2024-10-16 20:11:47.720928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.020 [2024-10-16 20:11:47.871266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:33.020 [2024-10-16 20:11:47.871543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.585 20:11:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:33.585 20:11:48 -- common/autotest_common.sh@852 -- # return 0 00:04:33.585 20:11:48 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:33.843 20:11:48 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56544 00:04:33.843 20:11:48 -- common/autotest_common.sh@926 -- # '[' -z 56544 ']' 00:04:33.843 20:11:48 -- common/autotest_common.sh@930 -- # kill -0 56544 00:04:33.843 20:11:48 -- common/autotest_common.sh@931 -- # uname 00:04:33.843 20:11:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:33.843 20:11:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56544 00:04:33.843 killing process with pid 56544 00:04:33.843 20:11:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:33.843 20:11:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:33.843 20:11:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56544' 00:04:33.843 20:11:48 -- common/autotest_common.sh@945 -- # kill 56544 00:04:33.843 20:11:48 -- common/autotest_common.sh@950 -- # wait 56544 00:04:35.223 ************************************ 00:04:35.223 END TEST alias_rpc 00:04:35.223 ************************************ 00:04:35.223 00:04:35.223 real 0m2.584s 00:04:35.223 user 0m2.688s 00:04:35.223 sys 0m0.356s 00:04:35.223 20:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.223 20:11:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.223 20:11:50 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:35.223 20:11:50 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:35.223 20:11:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.223 20:11:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.223 20:11:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.223 ************************************ 00:04:35.223 START TEST spdkcli_tcp 00:04:35.223 ************************************ 00:04:35.223 20:11:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:35.224 * Looking for test storage... 
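alias_rpc tears its target down through killprocess, and the trace above shows each guard in turn: confirm the pid is set and alive, resolve the command name with ps on Linux, refuse to kill a sudo wrapper, then kill and wait. Condensed into one sketch (the sudo branch is simplified to a bail-out; the real helper handles that case instead):

    # Condensed killprocess, following the guards visible in the trace above.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                       # must still be running
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1      # simplified: never kill sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap and surface exit status
    }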
00:04:35.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:35.224 20:11:50 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:35.224 20:11:50 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:35.224 20:11:50 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:35.224 20:11:50 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:35.224 20:11:50 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:35.224 20:11:50 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:35.224 20:11:50 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:35.224 20:11:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:35.224 20:11:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.224 20:11:50 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56631 00:04:35.224 20:11:50 -- spdkcli/tcp.sh@27 -- # waitforlisten 56631 00:04:35.224 20:11:50 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:35.224 20:11:50 -- common/autotest_common.sh@819 -- # '[' -z 56631 ']' 00:04:35.224 20:11:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.224 20:11:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:35.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.224 20:11:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.224 20:11:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:35.224 20:11:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.483 [2024-10-16 20:11:50.208665] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
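Here spdk_tgt gets -m 0x3 -p 0: the hex cpumask selects cores 0 and 1 (bits 0 and 1 set), and the "Reactor started on core 0/1" notices just below confirm two reactors came up. A throwaway decoder for such masks, purely illustrative and not part of the harness:

    # Illustrative cpumask decoder -- not part of the SPDK test harness.
    decode_cpumask() {
        local mask=$(( $1 )) bit
        for ((bit = 0; mask > 0; bit++, mask >>= 1)); do
            (( mask & 1 )) && echo "core $bit"
        done
    }
    decode_cpumask 0x3   # prints: core 0, core 1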
00:04:35.483 [2024-10-16 20:11:50.208784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56631 ] 00:04:35.483 [2024-10-16 20:11:50.358807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.745 [2024-10-16 20:11:50.557029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.745 [2024-10-16 20:11:50.557353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.745 [2024-10-16 20:11:50.557416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.154 20:11:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:37.154 20:11:51 -- common/autotest_common.sh@852 -- # return 0 00:04:37.154 20:11:51 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.154 20:11:51 -- spdkcli/tcp.sh@31 -- # socat_pid=56650 00:04:37.154 20:11:51 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:37.154 [ 00:04:37.154 "bdev_malloc_delete", 00:04:37.154 "bdev_malloc_create", 00:04:37.154 "bdev_null_resize", 00:04:37.154 "bdev_null_delete", 00:04:37.154 "bdev_null_create", 00:04:37.154 "bdev_nvme_cuse_unregister", 00:04:37.154 "bdev_nvme_cuse_register", 00:04:37.154 "bdev_opal_new_user", 00:04:37.154 "bdev_opal_set_lock_state", 00:04:37.154 "bdev_opal_delete", 00:04:37.154 "bdev_opal_get_info", 00:04:37.154 "bdev_opal_create", 00:04:37.154 "bdev_nvme_opal_revert", 00:04:37.154 "bdev_nvme_opal_init", 00:04:37.154 "bdev_nvme_send_cmd", 00:04:37.154 "bdev_nvme_get_path_iostat", 00:04:37.154 "bdev_nvme_get_mdns_discovery_info", 00:04:37.154 "bdev_nvme_stop_mdns_discovery", 00:04:37.154 "bdev_nvme_start_mdns_discovery", 00:04:37.154 "bdev_nvme_set_multipath_policy", 00:04:37.154 "bdev_nvme_set_preferred_path", 00:04:37.154 "bdev_nvme_get_io_paths", 00:04:37.154 "bdev_nvme_remove_error_injection", 00:04:37.154 "bdev_nvme_add_error_injection", 00:04:37.154 "bdev_nvme_get_discovery_info", 00:04:37.154 "bdev_nvme_stop_discovery", 00:04:37.154 "bdev_nvme_start_discovery", 00:04:37.154 "bdev_nvme_get_controller_health_info", 00:04:37.154 "bdev_nvme_disable_controller", 00:04:37.154 "bdev_nvme_enable_controller", 00:04:37.154 "bdev_nvme_reset_controller", 00:04:37.154 "bdev_nvme_get_transport_statistics", 00:04:37.154 "bdev_nvme_apply_firmware", 00:04:37.154 "bdev_nvme_detach_controller", 00:04:37.154 "bdev_nvme_get_controllers", 00:04:37.154 "bdev_nvme_attach_controller", 00:04:37.154 "bdev_nvme_set_hotplug", 00:04:37.154 "bdev_nvme_set_options", 00:04:37.154 "bdev_passthru_delete", 00:04:37.154 "bdev_passthru_create", 00:04:37.154 "bdev_lvol_grow_lvstore", 00:04:37.154 "bdev_lvol_get_lvols", 00:04:37.154 "bdev_lvol_get_lvstores", 00:04:37.154 "bdev_lvol_delete", 00:04:37.154 "bdev_lvol_set_read_only", 00:04:37.154 "bdev_lvol_resize", 00:04:37.154 "bdev_lvol_decouple_parent", 00:04:37.154 "bdev_lvol_inflate", 00:04:37.154 "bdev_lvol_rename", 00:04:37.154 "bdev_lvol_clone_bdev", 00:04:37.154 "bdev_lvol_clone", 00:04:37.154 "bdev_lvol_snapshot", 00:04:37.154 "bdev_lvol_create", 00:04:37.154 "bdev_lvol_delete_lvstore", 00:04:37.154 "bdev_lvol_rename_lvstore", 00:04:37.154 "bdev_lvol_create_lvstore", 00:04:37.154 "bdev_raid_set_options", 00:04:37.154 "bdev_raid_remove_base_bdev", 00:04:37.154 "bdev_raid_add_base_bdev", 
00:04:37.154 "bdev_raid_delete", 00:04:37.154 "bdev_raid_create", 00:04:37.154 "bdev_raid_get_bdevs", 00:04:37.154 "bdev_error_inject_error", 00:04:37.154 "bdev_error_delete", 00:04:37.154 "bdev_error_create", 00:04:37.154 "bdev_split_delete", 00:04:37.154 "bdev_split_create", 00:04:37.154 "bdev_delay_delete", 00:04:37.154 "bdev_delay_create", 00:04:37.154 "bdev_delay_update_latency", 00:04:37.154 "bdev_zone_block_delete", 00:04:37.154 "bdev_zone_block_create", 00:04:37.154 "blobfs_create", 00:04:37.154 "blobfs_detect", 00:04:37.154 "blobfs_set_cache_size", 00:04:37.154 "bdev_xnvme_delete", 00:04:37.154 "bdev_xnvme_create", 00:04:37.154 "bdev_aio_delete", 00:04:37.154 "bdev_aio_rescan", 00:04:37.154 "bdev_aio_create", 00:04:37.154 "bdev_ftl_set_property", 00:04:37.154 "bdev_ftl_get_properties", 00:04:37.154 "bdev_ftl_get_stats", 00:04:37.154 "bdev_ftl_unmap", 00:04:37.154 "bdev_ftl_unload", 00:04:37.154 "bdev_ftl_delete", 00:04:37.154 "bdev_ftl_load", 00:04:37.154 "bdev_ftl_create", 00:04:37.154 "bdev_virtio_attach_controller", 00:04:37.154 "bdev_virtio_scsi_get_devices", 00:04:37.154 "bdev_virtio_detach_controller", 00:04:37.154 "bdev_virtio_blk_set_hotplug", 00:04:37.154 "bdev_iscsi_delete", 00:04:37.154 "bdev_iscsi_create", 00:04:37.154 "bdev_iscsi_set_options", 00:04:37.154 "accel_error_inject_error", 00:04:37.154 "ioat_scan_accel_module", 00:04:37.154 "dsa_scan_accel_module", 00:04:37.154 "iaa_scan_accel_module", 00:04:37.154 "iscsi_set_options", 00:04:37.154 "iscsi_get_auth_groups", 00:04:37.154 "iscsi_auth_group_remove_secret", 00:04:37.154 "iscsi_auth_group_add_secret", 00:04:37.154 "iscsi_delete_auth_group", 00:04:37.154 "iscsi_create_auth_group", 00:04:37.154 "iscsi_set_discovery_auth", 00:04:37.154 "iscsi_get_options", 00:04:37.154 "iscsi_target_node_request_logout", 00:04:37.154 "iscsi_target_node_set_redirect", 00:04:37.154 "iscsi_target_node_set_auth", 00:04:37.154 "iscsi_target_node_add_lun", 00:04:37.154 "iscsi_get_connections", 00:04:37.154 "iscsi_portal_group_set_auth", 00:04:37.154 "iscsi_start_portal_group", 00:04:37.154 "iscsi_delete_portal_group", 00:04:37.154 "iscsi_create_portal_group", 00:04:37.154 "iscsi_get_portal_groups", 00:04:37.154 "iscsi_delete_target_node", 00:04:37.154 "iscsi_target_node_remove_pg_ig_maps", 00:04:37.154 "iscsi_target_node_add_pg_ig_maps", 00:04:37.154 "iscsi_create_target_node", 00:04:37.154 "iscsi_get_target_nodes", 00:04:37.154 "iscsi_delete_initiator_group", 00:04:37.154 "iscsi_initiator_group_remove_initiators", 00:04:37.154 "iscsi_initiator_group_add_initiators", 00:04:37.154 "iscsi_create_initiator_group", 00:04:37.154 "iscsi_get_initiator_groups", 00:04:37.154 "nvmf_set_crdt", 00:04:37.154 "nvmf_set_config", 00:04:37.154 "nvmf_set_max_subsystems", 00:04:37.154 "nvmf_subsystem_get_listeners", 00:04:37.154 "nvmf_subsystem_get_qpairs", 00:04:37.154 "nvmf_subsystem_get_controllers", 00:04:37.154 "nvmf_get_stats", 00:04:37.154 "nvmf_get_transports", 00:04:37.154 "nvmf_create_transport", 00:04:37.154 "nvmf_get_targets", 00:04:37.154 "nvmf_delete_target", 00:04:37.154 "nvmf_create_target", 00:04:37.154 "nvmf_subsystem_allow_any_host", 00:04:37.154 "nvmf_subsystem_remove_host", 00:04:37.154 "nvmf_subsystem_add_host", 00:04:37.154 "nvmf_subsystem_remove_ns", 00:04:37.154 "nvmf_subsystem_add_ns", 00:04:37.154 "nvmf_subsystem_listener_set_ana_state", 00:04:37.154 "nvmf_discovery_get_referrals", 00:04:37.154 "nvmf_discovery_remove_referral", 00:04:37.154 "nvmf_discovery_add_referral", 00:04:37.154 "nvmf_subsystem_remove_listener", 00:04:37.154 
"nvmf_subsystem_add_listener", 00:04:37.154 "nvmf_delete_subsystem", 00:04:37.154 "nvmf_create_subsystem", 00:04:37.154 "nvmf_get_subsystems", 00:04:37.154 "env_dpdk_get_mem_stats", 00:04:37.154 "nbd_get_disks", 00:04:37.154 "nbd_stop_disk", 00:04:37.154 "nbd_start_disk", 00:04:37.154 "ublk_recover_disk", 00:04:37.154 "ublk_get_disks", 00:04:37.154 "ublk_stop_disk", 00:04:37.154 "ublk_start_disk", 00:04:37.154 "ublk_destroy_target", 00:04:37.154 "ublk_create_target", 00:04:37.154 "virtio_blk_create_transport", 00:04:37.154 "virtio_blk_get_transports", 00:04:37.154 "vhost_controller_set_coalescing", 00:04:37.154 "vhost_get_controllers", 00:04:37.154 "vhost_delete_controller", 00:04:37.154 "vhost_create_blk_controller", 00:04:37.154 "vhost_scsi_controller_remove_target", 00:04:37.154 "vhost_scsi_controller_add_target", 00:04:37.154 "vhost_start_scsi_controller", 00:04:37.154 "vhost_create_scsi_controller", 00:04:37.154 "thread_set_cpumask", 00:04:37.154 "framework_get_scheduler", 00:04:37.154 "framework_set_scheduler", 00:04:37.154 "framework_get_reactors", 00:04:37.154 "thread_get_io_channels", 00:04:37.154 "thread_get_pollers", 00:04:37.154 "thread_get_stats", 00:04:37.154 "framework_monitor_context_switch", 00:04:37.154 "spdk_kill_instance", 00:04:37.154 "log_enable_timestamps", 00:04:37.154 "log_get_flags", 00:04:37.154 "log_clear_flag", 00:04:37.154 "log_set_flag", 00:04:37.154 "log_get_level", 00:04:37.154 "log_set_level", 00:04:37.154 "log_get_print_level", 00:04:37.154 "log_set_print_level", 00:04:37.154 "framework_enable_cpumask_locks", 00:04:37.154 "framework_disable_cpumask_locks", 00:04:37.154 "framework_wait_init", 00:04:37.154 "framework_start_init", 00:04:37.154 "scsi_get_devices", 00:04:37.154 "bdev_get_histogram", 00:04:37.154 "bdev_enable_histogram", 00:04:37.154 "bdev_set_qos_limit", 00:04:37.154 "bdev_set_qd_sampling_period", 00:04:37.154 "bdev_get_bdevs", 00:04:37.154 "bdev_reset_iostat", 00:04:37.154 "bdev_get_iostat", 00:04:37.155 "bdev_examine", 00:04:37.155 "bdev_wait_for_examine", 00:04:37.155 "bdev_set_options", 00:04:37.155 "notify_get_notifications", 00:04:37.155 "notify_get_types", 00:04:37.155 "accel_get_stats", 00:04:37.155 "accel_set_options", 00:04:37.155 "accel_set_driver", 00:04:37.155 "accel_crypto_key_destroy", 00:04:37.155 "accel_crypto_keys_get", 00:04:37.155 "accel_crypto_key_create", 00:04:37.155 "accel_assign_opc", 00:04:37.155 "accel_get_module_info", 00:04:37.155 "accel_get_opc_assignments", 00:04:37.155 "vmd_rescan", 00:04:37.155 "vmd_remove_device", 00:04:37.155 "vmd_enable", 00:04:37.155 "sock_set_default_impl", 00:04:37.155 "sock_impl_set_options", 00:04:37.155 "sock_impl_get_options", 00:04:37.155 "iobuf_get_stats", 00:04:37.155 "iobuf_set_options", 00:04:37.155 "framework_get_pci_devices", 00:04:37.155 "framework_get_config", 00:04:37.155 "framework_get_subsystems", 00:04:37.155 "trace_get_info", 00:04:37.155 "trace_get_tpoint_group_mask", 00:04:37.155 "trace_disable_tpoint_group", 00:04:37.155 "trace_enable_tpoint_group", 00:04:37.155 "trace_clear_tpoint_mask", 00:04:37.155 "trace_set_tpoint_mask", 00:04:37.155 "spdk_get_version", 00:04:37.155 "rpc_get_methods" 00:04:37.155 ] 00:04:37.155 20:11:51 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:37.155 20:11:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:37.155 20:11:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.155 20:11:51 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:37.155 20:11:51 -- spdkcli/tcp.sh@38 -- # killprocess 56631 00:04:37.155 
20:11:51 -- common/autotest_common.sh@926 -- # '[' -z 56631 ']' 00:04:37.155 20:11:51 -- common/autotest_common.sh@930 -- # kill -0 56631 00:04:37.155 20:11:51 -- common/autotest_common.sh@931 -- # uname 00:04:37.155 20:11:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:37.155 20:11:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56631 00:04:37.155 killing process with pid 56631 00:04:37.155 20:11:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:37.155 20:11:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:37.155 20:11:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56631' 00:04:37.155 20:11:51 -- common/autotest_common.sh@945 -- # kill 56631 00:04:37.155 20:11:51 -- common/autotest_common.sh@950 -- # wait 56631 00:04:38.536 ************************************ 00:04:38.536 END TEST spdkcli_tcp 00:04:38.536 ************************************ 00:04:38.536 00:04:38.536 real 0m3.349s 00:04:38.536 user 0m6.159s 00:04:38.536 sys 0m0.432s 00:04:38.536 20:11:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.536 20:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:38.536 20:11:53 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.536 20:11:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.536 20:11:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.536 20:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:38.536 ************************************ 00:04:38.536 START TEST dpdk_mem_utility 00:04:38.536 ************************************ 00:04:38.536 20:11:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.797 * Looking for test storage... 00:04:38.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:38.797 20:11:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:38.797 20:11:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56735 00:04:38.797 20:11:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56735 00:04:38.797 20:11:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.797 20:11:53 -- common/autotest_common.sh@819 -- # '[' -z 56735 ']' 00:04:38.797 20:11:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.797 20:11:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:38.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.797 20:11:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.797 20:11:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:38.797 20:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:38.797 [2024-10-16 20:11:53.587145] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
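spdkcli_tcp, which just finished, hinges on a single bridge: the target's RPC server listens only on /var/tmp/spdk.sock, and socat republishes that socket on 127.0.0.1:9998 so rpc.py can reach it over TCP, which is how the rpc_get_methods call above ran. The bridge in isolation, with flags copied from the trace (the trap is an assumption; the harness uses its own err_cleanup):

    # TCP-to-UNIX RPC bridge as traced in the spdkcli_tcp run above.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    trap 'kill "$socat_pid"' EXIT    # assumed cleanup

    # Drive the target over TCP: -r retries the connection up to 100 times,
    # -t sets a 2-second timeout, as in the traced invocation.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods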
00:04:38.797 [2024-10-16 20:11:53.587444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56735 ] 00:04:39.057 [2024-10-16 20:11:53.727799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.057 [2024-10-16 20:11:53.865600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:39.057 [2024-10-16 20:11:53.865892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.630 20:11:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:39.630 20:11:54 -- common/autotest_common.sh@852 -- # return 0 00:04:39.630 20:11:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:39.630 20:11:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:39.630 20:11:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.630 20:11:54 -- common/autotest_common.sh@10 -- # set +x 00:04:39.630 { 00:04:39.630 "filename": "/tmp/spdk_mem_dump.txt" 00:04:39.630 } 00:04:39.630 20:11:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.630 20:11:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:39.630 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:39.630 1 heaps totaling size 820.000000 MiB 00:04:39.630 size: 820.000000 MiB heap id: 0 00:04:39.630 end heaps---------- 00:04:39.630 8 mempools totaling size 598.116089 MiB 00:04:39.630 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:39.630 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:39.630 size: 84.521057 MiB name: bdev_io_56735 00:04:39.630 size: 51.011292 MiB name: evtpool_56735 00:04:39.630 size: 50.003479 MiB name: msgpool_56735 00:04:39.630 size: 21.763794 MiB name: PDU_Pool 00:04:39.630 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:39.630 size: 0.026123 MiB name: Session_Pool 00:04:39.630 end mempools------- 00:04:39.630 6 memzones totaling size 4.142822 MiB 00:04:39.630 size: 1.000366 MiB name: RG_ring_0_56735 00:04:39.630 size: 1.000366 MiB name: RG_ring_1_56735 00:04:39.630 size: 1.000366 MiB name: RG_ring_4_56735 00:04:39.630 size: 1.000366 MiB name: RG_ring_5_56735 00:04:39.630 size: 0.125366 MiB name: RG_ring_2_56735 00:04:39.630 size: 0.015991 MiB name: RG_ring_3_56735 00:04:39.630 end memzones------- 00:04:39.630 20:11:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:39.630 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:04:39.630 list of free elements. 
size: 18.451538 MiB 00:04:39.630 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:39.630 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:39.630 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:39.630 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:39.630 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:39.630 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:39.630 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:39.630 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:39.630 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:39.630 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:39.630 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:39.630 element at address: 0x200000200000 with size: 0.829224 MiB 00:04:39.630 element at address: 0x20001b000000 with size: 0.564880 MiB 00:04:39.630 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:39.630 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:39.630 element at address: 0x200013800000 with size: 0.467651 MiB 00:04:39.630 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:39.630 element at address: 0x200003a00000 with size: 0.352234 MiB 00:04:39.630 list of standard malloc elements. size: 199.284058 MiB 00:04:39.630 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:39.630 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:39.630 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:39.630 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:39.630 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:39.630 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:39.630 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:39.630 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:39.630 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:39.630 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:39.630 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:39.630 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:04:39.630 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:39.630 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:39.630 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:39.630 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:39.630 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:39.630 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:39.631 element at 
address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d1c0 
with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b092cc0 with size: 0.000244 MiB 
00:04:39.631 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:39.631 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846b580 with size: 0.000244 MiB 00:04:39.631 element at 
address: 0x20002846b680 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:39.631 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e680 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e780 
with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:39.632 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:39.632 list of memzone associated elements. size: 602.264404 MiB 00:04:39.632 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:39.632 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:39.632 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:39.632 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:39.632 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:39.632 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56735_0 00:04:39.632 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:39.632 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56735_0 00:04:39.632 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:39.632 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56735_0 00:04:39.632 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:39.632 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:39.632 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:39.632 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:39.632 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:39.632 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56735 00:04:39.632 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:39.632 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56735 00:04:39.632 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:39.632 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56735 00:04:39.632 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:39.632 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:39.632 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:39.632 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:39.632 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:39.632 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:39.632 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:39.632 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:39.632 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:39.632 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56735 00:04:39.632 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:39.632 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56735 00:04:39.632 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:39.632 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56735 00:04:39.632 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:39.632 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56735 00:04:39.632 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:39.632 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56735 00:04:39.632 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:39.632 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:39.632 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:39.632 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:39.632 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:39.632 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:39.632 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:39.632 associated memzone info: size: 0.125366 MiB name: RG_ring_2_56735 00:04:39.632 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:39.632 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:39.632 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:39.632 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:39.632 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:39.632 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56735 00:04:39.632 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:39.632 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:39.632 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:39.632 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56735 00:04:39.632 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:39.632 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56735 00:04:39.632 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:39.632 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:39.632 20:11:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:39.632 20:11:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56735 00:04:39.632 20:11:54 -- common/autotest_common.sh@926 -- # '[' -z 56735 ']' 00:04:39.632 20:11:54 -- common/autotest_common.sh@930 -- # kill -0 56735 00:04:39.632 20:11:54 -- common/autotest_common.sh@931 -- # uname 00:04:39.632 20:11:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:39.632 20:11:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56735 00:04:39.632 killing process with pid 56735 00:04:39.632 20:11:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
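The wall of heap, mempool and memzone lines above is dpdk_mem_utility making two passes over one dump: the env_dpdk_get_mem_stats RPC has the target write /tmp/spdk_mem_dump.txt, dpdk_mem_info.py then summarizes it, and dpdk_mem_info.py -m 0 re-reads it to expand heap 0 element by element. Reduced to its three commands (paths as traced; assumed to run from the repo root):

    # Memory-introspection sequence from the dpdk_mem_utility trace above.
    scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
    scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0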
00:04:39.632 20:11:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:39.632 20:11:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56735' 00:04:39.632 20:11:54 -- common/autotest_common.sh@945 -- # kill 56735 00:04:39.632 20:11:54 -- common/autotest_common.sh@950 -- # wait 56735 00:04:41.015 00:04:41.015 real 0m2.218s 00:04:41.015 user 0m2.229s 00:04:41.015 sys 0m0.331s 00:04:41.015 20:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.015 20:11:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.015 ************************************ 00:04:41.015 END TEST dpdk_mem_utility 00:04:41.015 ************************************ 00:04:41.015 20:11:55 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:41.015 20:11:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.015 20:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.015 20:11:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.015 ************************************ 00:04:41.015 START TEST event 00:04:41.015 ************************************ 00:04:41.015 20:11:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:41.015 * Looking for test storage... 00:04:41.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:41.015 20:11:55 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:41.015 20:11:55 -- bdev/nbd_common.sh@6 -- # set -e 00:04:41.015 20:11:55 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:41.015 20:11:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:41.015 20:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.015 20:11:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.015 ************************************ 00:04:41.015 START TEST event_perf 00:04:41.015 ************************************ 00:04:41.015 20:11:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:41.015 Running I/O for 1 seconds...[2024-10-16 20:11:55.812153] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:41.015 [2024-10-16 20:11:55.812335] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56812 ] 00:04:41.275 [2024-10-16 20:11:55.960514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.275 [2024-10-16 20:11:56.112245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.275 [2024-10-16 20:11:56.112465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.275 [2024-10-16 20:11:56.112704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.275 [2024-10-16 20:11:56.112727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.657 Running I/O for 1 seconds... 00:04:42.657 lcore 0: 214254 00:04:42.657 lcore 1: 214256 00:04:42.657 lcore 2: 214259 00:04:42.657 lcore 3: 214260 00:04:42.657 done. 
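event_perf above is the bluntest of the event-framework benchmarks: -m 0xF spreads reactors across cores 0-3, -t 1 bounds the run to one second, and each lcore prints its own tally (about 214k apiece in this run). Invocation as traced, with a hypothetical awk total bolted on:

    # event_perf as traced; the awk total is an illustrative add-on.
    test/event/event_perf/event_perf -m 0xF -t 1 | tee perf.log
    awk '/^lcore/ { total += $NF } END { print "total:", total }' perf.log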
00:04:42.657 00:04:42.657 real 0m1.540s 00:04:42.657 ************************************ 00:04:42.657 END TEST event_perf 00:04:42.657 ************************************ 00:04:42.657 user 0m4.328s 00:04:42.657 sys 0m0.097s 00:04:42.657 20:11:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.657 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:04:42.657 20:11:57 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:42.657 20:11:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:42.657 20:11:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.657 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:04:42.657 ************************************ 00:04:42.657 START TEST event_reactor 00:04:42.657 ************************************ 00:04:42.657 20:11:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:42.657 [2024-10-16 20:11:57.395930] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:42.657 [2024-10-16 20:11:57.396030] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56857 ] 00:04:42.657 [2024-10-16 20:11:57.543640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.917 [2024-10-16 20:11:57.684885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.302 test_start 00:04:44.302 oneshot 00:04:44.302 tick 100 00:04:44.302 tick 100 00:04:44.302 tick 250 00:04:44.302 tick 100 00:04:44.302 tick 100 00:04:44.302 tick 250 00:04:44.302 tick 100 00:04:44.302 tick 500 00:04:44.302 tick 100 00:04:44.302 tick 100 00:04:44.302 tick 250 00:04:44.302 tick 100 00:04:44.302 tick 100 00:04:44.302 test_end 00:04:44.302 ************************************ 00:04:44.302 END TEST event_reactor 00:04:44.302 ************************************ 00:04:44.302 00:04:44.302 real 0m1.524s 00:04:44.302 user 0m1.343s 00:04:44.302 sys 0m0.073s 00:04:44.302 20:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.302 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.302 20:11:58 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:44.302 20:11:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:44.302 20:11:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.302 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.302 ************************************ 00:04:44.302 START TEST event_reactor_perf 00:04:44.302 ************************************ 00:04:44.302 20:11:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:44.302 [2024-10-16 20:11:58.960150] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
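(Annotation: the tick trace above comes from test/event/reactor/reactor -t 1 on a single core. Reading the output, it registers a one-shot event plus periodic events and logs each expiry, so the expected shape is "test_start", one "oneshot", a stream of "tick <value>" lines, then "test_end"; the interpretation of the tick values as event periods is an inference from the output, not taken from the test source. Invocation as in the trace:)

    "$SPDK_DIR/test/event/reactor/reactor" -t 1
    # -t 1 : run for one second on core 0 (the EAL line above shows -c 0x1)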
00:04:44.302 [2024-10-16 20:11:58.960372] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56888 ] 00:04:44.302 [2024-10-16 20:11:59.105566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.562 [2024-10-16 20:11:59.248200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.963 test_start 00:04:45.963 test_end 00:04:45.963 Performance: 406854 events per second 00:04:45.963 00:04:45.963 real 0m1.524s 00:04:45.963 user 0m1.350s 00:04:45.963 sys 0m0.067s 00:04:45.963 20:12:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.963 ************************************ 00:04:45.963 END TEST event_reactor_perf 00:04:45.963 ************************************ 00:04:45.963 20:12:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.963 20:12:00 -- event/event.sh@49 -- # uname -s 00:04:45.963 20:12:00 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:45.963 20:12:00 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:45.963 20:12:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.963 20:12:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.963 20:12:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.963 ************************************ 00:04:45.963 START TEST event_scheduler 00:04:45.963 ************************************ 00:04:45.963 20:12:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:45.963 * Looking for test storage... 00:04:45.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:45.964 20:12:00 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:45.964 20:12:00 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56955 00:04:45.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.964 20:12:00 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.964 20:12:00 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:45.964 20:12:00 -- scheduler/scheduler.sh@37 -- # waitforlisten 56955 00:04:45.964 20:12:00 -- common/autotest_common.sh@819 -- # '[' -z 56955 ']' 00:04:45.964 20:12:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.964 20:12:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:45.964 20:12:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.964 20:12:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:45.964 20:12:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.964 [2024-10-16 20:12:00.619866] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
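(Annotation: event_reactor_perf, which finishes above with "Performance: 406854 events per second", is the throughput companion to the reactor test: a single reactor on core 0 re-queues events back to itself as fast as it can for the given time and reports the rate. The absolute number depends on the host, a VM-host-SM38 worker in this run, so only the order of magnitude is meaningful. Invocation as in the trace:)

    "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1
    # prints "Performance: <N> events per second" after the one-second run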
00:04:45.964 [2024-10-16 20:12:00.620179] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56955 ] 00:04:45.964 [2024-10-16 20:12:00.770842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:46.224 [2024-10-16 20:12:00.954669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.224 [2024-10-16 20:12:00.954904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.224 [2024-10-16 20:12:00.955088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.224 [2024-10-16 20:12:00.955115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.797 20:12:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:46.797 20:12:01 -- common/autotest_common.sh@852 -- # return 0 00:04:46.797 20:12:01 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:46.797 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.797 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 POWER: Env isn't set yet! 00:04:46.797 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:46.797 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.797 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.797 POWER: Attempting to initialise PSTAT power management... 00:04:46.797 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.797 POWER: Cannot set governor of lcore 0 to performance 00:04:46.797 POWER: Attempting to initialise AMD PSTATE power management... 00:04:46.797 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.797 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.797 POWER: Attempting to initialise CPPC power management... 00:04:46.797 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.797 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.797 POWER: Attempting to initialise VM power management... 
00:04:46.797 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:46.797 POWER: Unable to set Power Management Environment for lcore 0 00:04:46.797 [2024-10-16 20:12:01.440758] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:46.797 [2024-10-16 20:12:01.440786] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:46.797 [2024-10-16 20:12:01.440808] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:46.797 [2024-10-16 20:12:01.440835] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:46.797 [2024-10-16 20:12:01.440883] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:46.797 [2024-10-16 20:12:01.440905] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:46.797 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.797 20:12:01 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:46.797 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.797 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 [2024-10-16 20:12:01.661726] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:46.797 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.797 20:12:01 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:46.797 20:12:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.797 20:12:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.797 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 ************************************ 00:04:46.797 START TEST scheduler_create_thread 00:04:46.797 ************************************ 00:04:46.797 20:12:01 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:46.797 20:12:01 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:46.797 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.797 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 2 00:04:46.797 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.797 20:12:01 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:46.797 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.797 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 3 00:04:46.797 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.797 20:12:01 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:46.797 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.797 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 4 00:04:46.797 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.797 20:12:01 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:46.797 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.797 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 5 00:04:46.797 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.797 20:12:01 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:46.797 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.797 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 6 00:04:46.797 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:46.797 20:12:01 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:46.797 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:46.797 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.058 7 00:04:47.058 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.058 20:12:01 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:47.058 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.058 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.058 8 00:04:47.058 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.058 20:12:01 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:47.058 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.058 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.058 9 00:04:47.058 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.058 20:12:01 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:47.058 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.058 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.058 10 00:04:47.058 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.058 20:12:01 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:47.058 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.058 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.058 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.058 20:12:01 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:47.058 20:12:01 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:47.058 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.058 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.058 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.058 20:12:01 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:47.058 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.058 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.058 20:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.058 20:12:01 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:47.058 20:12:01 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:47.058 20:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.058 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.629 ************************************ 00:04:47.629 END TEST scheduler_create_thread 00:04:47.629 ************************************ 00:04:47.629 20:12:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.629 00:04:47.629 real 0m0.591s 00:04:47.629 user 0m0.013s 00:04:47.629 sys 0m0.005s 00:04:47.629 20:12:02 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.629 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:04:47.629 20:12:02 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:47.629 20:12:02 -- scheduler/scheduler.sh@46 -- # killprocess 56955 00:04:47.629 20:12:02 -- common/autotest_common.sh@926 -- # '[' -z 56955 ']' 00:04:47.629 20:12:02 -- common/autotest_common.sh@930 -- # kill -0 56955 00:04:47.629 20:12:02 -- common/autotest_common.sh@931 -- # uname 00:04:47.629 20:12:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:47.629 20:12:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56955 00:04:47.629 killing process with pid 56955 00:04:47.629 20:12:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:47.629 20:12:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:47.629 20:12:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56955' 00:04:47.629 20:12:02 -- common/autotest_common.sh@945 -- # kill 56955 00:04:47.629 20:12:02 -- common/autotest_common.sh@950 -- # wait 56955 00:04:47.889 [2024-10-16 20:12:02.746605] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:48.831 00:04:48.831 real 0m2.922s 00:04:48.831 user 0m5.305s 00:04:48.831 sys 0m0.310s 00:04:48.831 20:12:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.831 ************************************ 00:04:48.831 END TEST event_scheduler 00:04:48.831 ************************************ 00:04:48.831 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:04:48.831 20:12:03 -- event/event.sh@51 -- # modprobe -n nbd 00:04:48.831 20:12:03 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:48.831 20:12:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.831 20:12:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.831 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:04:48.831 ************************************ 00:04:48.831 START TEST app_repeat 00:04:48.831 ************************************ 00:04:48.831 20:12:03 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:48.831 20:12:03 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.832 20:12:03 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.832 20:12:03 -- event/event.sh@13 -- # local nbd_list 00:04:48.832 20:12:03 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.832 20:12:03 -- event/event.sh@14 -- # local bdev_list 00:04:48.832 20:12:03 -- event/event.sh@15 -- # local repeat_times=4 00:04:48.832 20:12:03 -- event/event.sh@17 -- # modprobe nbd 00:04:48.832 20:12:03 -- event/event.sh@19 -- # repeat_pid=57039 00:04:48.832 20:12:03 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.832 Process app_repeat pid: 57039 00:04:48.832 20:12:03 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57039' 00:04:48.832 20:12:03 -- event/event.sh@23 -- # for i in {0..2} 00:04:48.832 spdk_app_start Round 0 00:04:48.832 20:12:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:48.832 20:12:03 -- event/event.sh@25 -- # waitforlisten 57039 /var/tmp/spdk-nbd.sock 00:04:48.832 20:12:03 -- common/autotest_common.sh@819 -- # '[' -z 57039 ']' 00:04:48.832 20:12:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.832 20:12:03 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:48.832 20:12:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:48.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.832 20:12:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:48.832 20:12:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:48.832 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:04:48.832 [2024-10-16 20:12:03.505520] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:48.832 [2024-10-16 20:12:03.505933] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57039 ] 00:04:48.832 [2024-10-16 20:12:03.654852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.092 [2024-10-16 20:12:03.823919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.093 [2024-10-16 20:12:03.824008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.664 20:12:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:49.664 20:12:04 -- common/autotest_common.sh@852 -- # return 0 00:04:49.664 20:12:04 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.664 Malloc0 00:04:49.664 20:12:04 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.925 Malloc1 00:04:49.926 20:12:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@12 -- # local i 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.926 20:12:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.186 /dev/nbd0 00:04:50.186 20:12:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.186 20:12:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.186 20:12:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:50.186 20:12:04 -- common/autotest_common.sh@857 -- # local i 00:04:50.186 20:12:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:50.186 20:12:04 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:50.186 20:12:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:50.186 20:12:05 -- common/autotest_common.sh@861 -- # break 00:04:50.186 20:12:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:50.186 20:12:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:50.186 20:12:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.186 1+0 records in 00:04:50.186 1+0 records out 00:04:50.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265705 s, 15.4 MB/s 00:04:50.186 20:12:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.186 20:12:05 -- common/autotest_common.sh@874 -- # size=4096 00:04:50.186 20:12:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.186 20:12:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:50.186 20:12:05 -- common/autotest_common.sh@877 -- # return 0 00:04:50.186 20:12:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.186 20:12:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.186 20:12:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.447 /dev/nbd1 00:04:50.447 20:12:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.447 20:12:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.447 20:12:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:50.447 20:12:05 -- common/autotest_common.sh@857 -- # local i 00:04:50.447 20:12:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:50.447 20:12:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:50.447 20:12:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:50.447 20:12:05 -- common/autotest_common.sh@861 -- # break 00:04:50.447 20:12:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:50.447 20:12:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:50.447 20:12:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.447 1+0 records in 00:04:50.447 1+0 records out 00:04:50.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025891 s, 15.8 MB/s 00:04:50.447 20:12:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.447 20:12:05 -- common/autotest_common.sh@874 -- # size=4096 00:04:50.447 20:12:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.447 20:12:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:50.447 20:12:05 -- common/autotest_common.sh@877 -- # return 0 00:04:50.447 20:12:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.447 20:12:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.447 20:12:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.447 20:12:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.447 20:12:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.448 20:12:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:50.448 { 00:04:50.448 "nbd_device": "/dev/nbd0", 00:04:50.448 "bdev_name": "Malloc0" 00:04:50.448 }, 00:04:50.448 { 00:04:50.448 "nbd_device": "/dev/nbd1", 
00:04:50.448 "bdev_name": "Malloc1" 00:04:50.448 } 00:04:50.448 ]' 00:04:50.448 20:12:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.448 20:12:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.448 { 00:04:50.448 "nbd_device": "/dev/nbd0", 00:04:50.448 "bdev_name": "Malloc0" 00:04:50.448 }, 00:04:50.448 { 00:04:50.448 "nbd_device": "/dev/nbd1", 00:04:50.448 "bdev_name": "Malloc1" 00:04:50.448 } 00:04:50.448 ]' 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.709 /dev/nbd1' 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.709 /dev/nbd1' 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.709 256+0 records in 00:04:50.709 256+0 records out 00:04:50.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063827 s, 164 MB/s 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.709 256+0 records in 00:04:50.709 256+0 records out 00:04:50.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197896 s, 53.0 MB/s 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.709 256+0 records in 00:04:50.709 256+0 records out 00:04:50.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224513 s, 46.7 MB/s 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@51 -- # local i 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.709 20:12:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@41 -- # break 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@41 -- # break 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.971 20:12:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.231 20:12:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.231 20:12:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.231 20:12:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.231 20:12:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.231 20:12:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.231 20:12:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.232 20:12:06 -- bdev/nbd_common.sh@65 -- # true 00:04:51.232 20:12:06 -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.232 20:12:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.232 20:12:06 -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.232 20:12:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.232 20:12:06 -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.232 20:12:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.492 20:12:06 -- event/event.sh@35 -- # sleep 3 00:04:52.436 [2024-10-16 20:12:07.287395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.696 [2024-10-16 20:12:07.543383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.696 [2024-10-16 
20:12:07.543490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.956 [2024-10-16 20:12:07.697951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.956 [2024-10-16 20:12:07.698014] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.861 spdk_app_start Round 1 00:04:54.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.861 20:12:09 -- event/event.sh@23 -- # for i in {0..2} 00:04:54.861 20:12:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:54.861 20:12:09 -- event/event.sh@25 -- # waitforlisten 57039 /var/tmp/spdk-nbd.sock 00:04:54.861 20:12:09 -- common/autotest_common.sh@819 -- # '[' -z 57039 ']' 00:04:54.861 20:12:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.861 20:12:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.861 20:12:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.861 20:12:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.861 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.861 20:12:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:54.861 20:12:09 -- common/autotest_common.sh@852 -- # return 0 00:04:54.861 20:12:09 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.118 Malloc0 00:04:55.118 20:12:09 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.118 Malloc1 00:04:55.376 20:12:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@12 -- # local i 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.376 /dev/nbd0 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.376 20:12:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:55.376 20:12:10 -- common/autotest_common.sh@857 -- # local i 00:04:55.376 20:12:10 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:04:55.376 20:12:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:55.376 20:12:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:55.376 20:12:10 -- common/autotest_common.sh@861 -- # break 00:04:55.376 20:12:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:55.376 20:12:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:55.376 20:12:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.376 1+0 records in 00:04:55.376 1+0 records out 00:04:55.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025424 s, 16.1 MB/s 00:04:55.376 20:12:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.376 20:12:10 -- common/autotest_common.sh@874 -- # size=4096 00:04:55.376 20:12:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.376 20:12:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:55.376 20:12:10 -- common/autotest_common.sh@877 -- # return 0 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.376 20:12:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.634 /dev/nbd1 00:04:55.634 20:12:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.634 20:12:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.634 20:12:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:55.634 20:12:10 -- common/autotest_common.sh@857 -- # local i 00:04:55.634 20:12:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:55.634 20:12:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:55.634 20:12:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:55.634 20:12:10 -- common/autotest_common.sh@861 -- # break 00:04:55.634 20:12:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:55.634 20:12:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:55.634 20:12:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.634 1+0 records in 00:04:55.634 1+0 records out 00:04:55.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163402 s, 25.1 MB/s 00:04:55.634 20:12:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.634 20:12:10 -- common/autotest_common.sh@874 -- # size=4096 00:04:55.634 20:12:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.634 20:12:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:55.634 20:12:10 -- common/autotest_common.sh@877 -- # return 0 00:04:55.634 20:12:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.634 20:12:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.634 20:12:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.634 20:12:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.634 20:12:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.893 { 00:04:55.893 "nbd_device": "/dev/nbd0", 00:04:55.893 "bdev_name": "Malloc0" 00:04:55.893 }, 00:04:55.893 { 00:04:55.893 
"nbd_device": "/dev/nbd1", 00:04:55.893 "bdev_name": "Malloc1" 00:04:55.893 } 00:04:55.893 ]' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.893 { 00:04:55.893 "nbd_device": "/dev/nbd0", 00:04:55.893 "bdev_name": "Malloc0" 00:04:55.893 }, 00:04:55.893 { 00:04:55.893 "nbd_device": "/dev/nbd1", 00:04:55.893 "bdev_name": "Malloc1" 00:04:55.893 } 00:04:55.893 ]' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.893 /dev/nbd1' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.893 /dev/nbd1' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.893 256+0 records in 00:04:55.893 256+0 records out 00:04:55.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118996 s, 88.1 MB/s 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.893 256+0 records in 00:04:55.893 256+0 records out 00:04:55.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163164 s, 64.3 MB/s 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.893 256+0 records in 00:04:55.893 256+0 records out 00:04:55.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191724 s, 54.7 MB/s 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.893 20:12:10 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@51 -- # local i 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.893 20:12:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@41 -- # break 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.151 20:12:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@41 -- # break 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.409 20:12:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@65 -- # true 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.667 20:12:11 -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.667 20:12:11 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.925 20:12:11 -- event/event.sh@35 -- # sleep 3 00:04:57.491 [2024-10-16 20:12:12.358841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.749 [2024-10-16 20:12:12.507429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
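(Annotation: app_repeat drives three identical rounds of this bdev/NBD round-trip. Each round creates two 64 MiB malloc bdevs with 4 KiB blocks, exports them as /dev/nbd0 and /dev/nbd1, writes 1 MiB of random data through each, verifies byte-for-byte with cmp, tears the disks down, and kills the app so the next round restarts it. Condensed into one round below, with the rpc.py and dd calls exactly as they appear in the trace; the wrapping into a script is the only thing added:)

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TESTFILE="$SPDK_DIR/test/event/nbdrandtest"
    $RPC bdev_malloc_create 64 4096                 # -> Malloc0 (64 MiB, 4 KiB blocks)
    $RPC bdev_malloc_create 64 4096                 # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of="$TESTFILE" bs=4096 count=256
    dd if="$TESTFILE" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    dd if="$TESTFILE" of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$TESTFILE" /dev/nbd0              # verify the write round-trip
    cmp -b -n 1M "$TESTFILE" /dev/nbd1
    rm "$TESTFILE"
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM                 # end of round; app restarts for the next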
00:04:57.749 [2024-10-16 20:12:12.507510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.749 [2024-10-16 20:12:12.623603] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.749 [2024-10-16 20:12:12.623672] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.278 20:12:14 -- event/event.sh@23 -- # for i in {0..2} 00:05:00.278 spdk_app_start Round 2 00:05:00.278 20:12:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:00.278 20:12:14 -- event/event.sh@25 -- # waitforlisten 57039 /var/tmp/spdk-nbd.sock 00:05:00.278 20:12:14 -- common/autotest_common.sh@819 -- # '[' -z 57039 ']' 00:05:00.278 20:12:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.278 20:12:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.278 20:12:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.278 20:12:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.278 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:05:00.278 20:12:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.278 20:12:14 -- common/autotest_common.sh@852 -- # return 0 00:05:00.278 20:12:14 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.278 Malloc0 00:05:00.278 20:12:15 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.536 Malloc1 00:05:00.536 20:12:15 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@12 -- # local i 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.536 20:12:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.800 /dev/nbd0 00:05:00.800 20:12:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.800 20:12:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.800 20:12:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:00.800 20:12:15 -- common/autotest_common.sh@857 -- # local i 00:05:00.800 20:12:15 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:00.800 20:12:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:00.800 20:12:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:00.800 20:12:15 -- common/autotest_common.sh@861 -- # break 00:05:00.800 20:12:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:00.800 20:12:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:00.800 20:12:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.800 1+0 records in 00:05:00.800 1+0 records out 00:05:00.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268868 s, 15.2 MB/s 00:05:00.800 20:12:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.800 20:12:15 -- common/autotest_common.sh@874 -- # size=4096 00:05:00.800 20:12:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.800 20:12:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:00.800 20:12:15 -- common/autotest_common.sh@877 -- # return 0 00:05:00.800 20:12:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.800 20:12:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.800 20:12:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.800 /dev/nbd1 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.062 20:12:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:01.062 20:12:15 -- common/autotest_common.sh@857 -- # local i 00:05:01.062 20:12:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:01.062 20:12:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:01.062 20:12:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:01.062 20:12:15 -- common/autotest_common.sh@861 -- # break 00:05:01.062 20:12:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:01.062 20:12:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:01.062 20:12:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.062 1+0 records in 00:05:01.062 1+0 records out 00:05:01.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000140683 s, 29.1 MB/s 00:05:01.062 20:12:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.062 20:12:15 -- common/autotest_common.sh@874 -- # size=4096 00:05:01.062 20:12:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.062 20:12:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:01.062 20:12:15 -- common/autotest_common.sh@877 -- # return 0 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.062 { 00:05:01.062 "nbd_device": "/dev/nbd0", 00:05:01.062 "bdev_name": "Malloc0" 
00:05:01.062 }, 00:05:01.062 { 00:05:01.062 "nbd_device": "/dev/nbd1", 00:05:01.062 "bdev_name": "Malloc1" 00:05:01.062 } 00:05:01.062 ]' 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.062 { 00:05:01.062 "nbd_device": "/dev/nbd0", 00:05:01.062 "bdev_name": "Malloc0" 00:05:01.062 }, 00:05:01.062 { 00:05:01.062 "nbd_device": "/dev/nbd1", 00:05:01.062 "bdev_name": "Malloc1" 00:05:01.062 } 00:05:01.062 ]' 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.062 /dev/nbd1' 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.062 /dev/nbd1' 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.062 20:12:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.321 256+0 records in 00:05:01.321 256+0 records out 00:05:01.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00562941 s, 186 MB/s 00:05:01.321 20:12:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.321 20:12:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.321 256+0 records in 00:05:01.321 256+0 records out 00:05:01.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160553 s, 65.3 MB/s 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.321 256+0 records in 00:05:01.321 256+0 records out 00:05:01.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173245 s, 60.5 MB/s 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@51 -- # local i 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@41 -- # break 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.321 20:12:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@41 -- # break 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.580 20:12:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@65 -- # true 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.837 20:12:16 -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.837 20:12:16 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.096 20:12:16 -- event/event.sh@35 -- # sleep 3 00:05:03.027 [2024-10-16 20:12:17.606113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.027 [2024-10-16 20:12:17.760782] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:03.027 [2024-10-16 20:12:17.760802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.027 [2024-10-16 20:12:17.875732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:03.027 [2024-10-16 20:12:17.875788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.557 20:12:19 -- event/event.sh@38 -- # waitforlisten 57039 /var/tmp/spdk-nbd.sock 00:05:05.557 20:12:19 -- common/autotest_common.sh@819 -- # '[' -z 57039 ']' 00:05:05.557 20:12:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.557 20:12:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.557 20:12:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.557 20:12:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.557 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.557 20:12:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:05.557 20:12:20 -- common/autotest_common.sh@852 -- # return 0 00:05:05.557 20:12:20 -- event/event.sh@39 -- # killprocess 57039 00:05:05.557 20:12:20 -- common/autotest_common.sh@926 -- # '[' -z 57039 ']' 00:05:05.557 20:12:20 -- common/autotest_common.sh@930 -- # kill -0 57039 00:05:05.557 20:12:20 -- common/autotest_common.sh@931 -- # uname 00:05:05.557 20:12:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:05.557 20:12:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57039 00:05:05.557 killing process with pid 57039 00:05:05.557 20:12:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:05.557 20:12:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:05.557 20:12:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57039' 00:05:05.557 20:12:20 -- common/autotest_common.sh@945 -- # kill 57039 00:05:05.557 20:12:20 -- common/autotest_common.sh@950 -- # wait 57039 00:05:06.125 spdk_app_start is called in Round 0. 00:05:06.125 Shutdown signal received, stop current app iteration 00:05:06.125 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:06.125 spdk_app_start is called in Round 1. 00:05:06.125 Shutdown signal received, stop current app iteration 00:05:06.125 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:06.125 spdk_app_start is called in Round 2. 00:05:06.125 Shutdown signal received, stop current app iteration 00:05:06.125 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:06.125 spdk_app_start is called in Round 3. 
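[annotation] The nbd trace above runs the full device data-path check: fill a scratch file with random bytes, write it to every exported /dev/nbdX with O_DIRECT, read it back with cmp, then tear each device down over RPC and poll until the kernel drops it. A consolidated sketch of those steps (the device list, retry count, and sleep interval are illustrative assumptions, not the script's literals):

    #!/usr/bin/env bash
    # Sketch of the nbd_dd_data_verify / nbd_stop_disks flow from the trace.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    NBD_DEVICES=(/dev/nbd0 /dev/nbd1)   # assumed device list
    TMP_FILE=$(mktemp)

    # 1 MiB (256 x 4 KiB blocks) of random reference data, generated once.
    dd if=/dev/urandom of="$TMP_FILE" bs=4096 count=256

    for dev in "${NBD_DEVICES[@]}"; do
        # Write with O_DIRECT so the comparison exercises the block device,
        # not the page cache.
        dd if="$TMP_FILE" of="$dev" bs=4096 count=256 oflag=direct
    done

    for dev in "${NBD_DEVICES[@]}"; do
        # Byte-for-byte verify of the first 1 MiB against the reference.
        cmp -b -n 1M "$TMP_FILE" "$dev"
    done
    rm "$TMP_FILE"

    for dev in "${NBD_DEVICES[@]}"; do
        "$RPC" -s "$SOCK" nbd_stop_disk "$dev"
        # Poll /proc/partitions until the kernel actually removes the device,
        # as waitfornbd_exit does in the trace (retry/sleep values assumed).
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done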
00:05:06.125 Shutdown signal received, stop current app iteration 00:05:06.125 ************************************ 00:05:06.125 END TEST app_repeat 00:05:06.125 ************************************ 00:05:06.125 20:12:20 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:06.125 20:12:20 -- event/event.sh@42 -- # return 0 00:05:06.125 00:05:06.125 real 0m17.340s 00:05:06.125 user 0m36.738s 00:05:06.125 sys 0m2.046s 00:05:06.125 20:12:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.125 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.125 20:12:20 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:06.125 20:12:20 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:06.125 20:12:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.125 20:12:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.125 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.125 ************************************ 00:05:06.125 START TEST cpu_locks 00:05:06.125 ************************************ 00:05:06.125 20:12:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:06.125 * Looking for test storage... 00:05:06.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:06.125 20:12:20 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:06.125 20:12:20 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:06.125 20:12:20 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:06.125 20:12:20 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:06.125 20:12:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.125 20:12:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.125 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.125 ************************************ 00:05:06.125 START TEST default_locks 00:05:06.125 ************************************ 00:05:06.125 20:12:20 -- common/autotest_common.sh@1104 -- # default_locks 00:05:06.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.125 20:12:20 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57455 00:05:06.125 20:12:20 -- event/cpu_locks.sh@47 -- # waitforlisten 57455 00:05:06.125 20:12:20 -- common/autotest_common.sh@819 -- # '[' -z 57455 ']' 00:05:06.125 20:12:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.125 20:12:20 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.125 20:12:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:06.125 20:12:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.125 20:12:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:06.125 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.125 [2024-10-16 20:12:21.017488] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
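[annotation] cpu_locks.sh installs its cleanup handler up front (trap cleanup EXIT SIGTERM SIGINT in the trace above), so targets and lock files are reclaimed on every exit path, including failed negative tests. A sketch of that teardown idiom (variable names are placeholders; the real script tracks several pids):

    # Installed once, before any test runs: on any exit, kill whichever
    # targets are still alive and drop leftover per-core lock files.
    cleanup() {
        [[ -z "${spdk_tgt_pid:-}" ]] || killprocess "$spdk_tgt_pid" || true
        [[ -z "${spdk_tgt_pid2:-}" ]] || killprocess "$spdk_tgt_pid2" || true
        rm -f /var/tmp/spdk_cpu_lock_*
    }
    trap cleanup EXIT SIGTERM SIGINT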
00:05:06.125 [2024-10-16 20:12:21.018277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57455 ] 00:05:06.385 [2024-10-16 20:12:21.165063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.644 [2024-10-16 20:12:21.338330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.644 [2024-10-16 20:12:21.338525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.016 20:12:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:08.016 20:12:22 -- common/autotest_common.sh@852 -- # return 0 00:05:08.016 20:12:22 -- event/cpu_locks.sh@49 -- # locks_exist 57455 00:05:08.016 20:12:22 -- event/cpu_locks.sh@22 -- # lslocks -p 57455 00:05:08.016 20:12:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.016 20:12:22 -- event/cpu_locks.sh@50 -- # killprocess 57455 00:05:08.016 20:12:22 -- common/autotest_common.sh@926 -- # '[' -z 57455 ']' 00:05:08.016 20:12:22 -- common/autotest_common.sh@930 -- # kill -0 57455 00:05:08.016 20:12:22 -- common/autotest_common.sh@931 -- # uname 00:05:08.016 20:12:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:08.016 20:12:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57455 00:05:08.016 20:12:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:08.016 killing process with pid 57455 00:05:08.016 20:12:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:08.016 20:12:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57455' 00:05:08.016 20:12:22 -- common/autotest_common.sh@945 -- # kill 57455 00:05:08.016 20:12:22 -- common/autotest_common.sh@950 -- # wait 57455 00:05:09.390 20:12:23 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57455 00:05:09.390 20:12:23 -- common/autotest_common.sh@640 -- # local es=0 00:05:09.390 20:12:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57455 00:05:09.390 20:12:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:09.390 20:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:09.390 20:12:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:09.390 20:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:09.390 20:12:23 -- common/autotest_common.sh@643 -- # waitforlisten 57455 00:05:09.390 20:12:23 -- common/autotest_common.sh@819 -- # '[' -z 57455 ']' 00:05:09.390 20:12:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.390 20:12:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.390 20:12:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
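[annotation] killprocess, used to stop the target above, guards against stale or recycled pids before signalling: it checks the process is alive with kill -0 and inspects its command name with ps. A simplified sketch of that pattern:

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 1                       # still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        [[ "$process_name" != sudo ]] || return 1        # simplified: the real helper handles sudo-run targets specially
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap it; a non-zero exit is fine here
    }

Note that wait only reaps children of the current shell, which these targets are, since the test script launched them.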
00:05:09.390 20:12:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.390 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 ERROR: process (pid: 57455) is no longer running 00:05:09.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57455) - No such process 00:05:09.390 20:12:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.390 20:12:23 -- common/autotest_common.sh@852 -- # return 1 00:05:09.390 20:12:23 -- common/autotest_common.sh@643 -- # es=1 00:05:09.390 20:12:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:09.390 20:12:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:09.390 20:12:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:09.390 20:12:23 -- event/cpu_locks.sh@54 -- # no_locks 00:05:09.390 20:12:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.390 20:12:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.390 20:12:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.390 00:05:09.390 real 0m3.041s 00:05:09.390 user 0m3.133s 00:05:09.390 sys 0m0.487s 00:05:09.390 ************************************ 00:05:09.390 END TEST default_locks 00:05:09.390 20:12:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.390 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 ************************************ 00:05:09.390 20:12:24 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:09.390 20:12:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.390 20:12:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.390 20:12:24 -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 ************************************ 00:05:09.390 START TEST default_locks_via_rpc 00:05:09.390 ************************************ 00:05:09.390 20:12:24 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:09.390 20:12:24 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57521 00:05:09.390 20:12:24 -- event/cpu_locks.sh@63 -- # waitforlisten 57521 00:05:09.390 20:12:24 -- common/autotest_common.sh@819 -- # '[' -z 57521 ']' 00:05:09.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.390 20:12:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.390 20:12:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.390 20:12:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.390 20:12:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.390 20:12:24 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.390 20:12:24 -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 [2024-10-16 20:12:24.124180] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
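[annotation] Two small assertions recur through all of these tests: locks_exist, which confirms a running target holds its spdk_cpu_lock file locks (the lslocks pipeline visible in the trace), and no_locks, which confirms nothing is left in /var/tmp afterwards. A sketch of both (the nullglob setting is an assumption, so an unmatched glob expands to an empty array):

    shopt -s nullglob

    # Does the given pid hold at least one CPU-core file lock?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # Are any per-core lock files left behind in /var/tmp?
    no_locks() {
        local lock_files=(/var/tmp/spdk_cpu_lock*)
        ((${#lock_files[@]} == 0))
    }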
00:05:09.390 [2024-10-16 20:12:24.124311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57521 ] 00:05:09.390 [2024-10-16 20:12:24.273346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.648 [2024-10-16 20:12:24.464305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:09.648 [2024-10-16 20:12:24.464503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.020 20:12:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.020 20:12:25 -- common/autotest_common.sh@852 -- # return 0 00:05:11.020 20:12:25 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:11.020 20:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.020 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.020 20:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.020 20:12:25 -- event/cpu_locks.sh@67 -- # no_locks 00:05:11.020 20:12:25 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.020 20:12:25 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.020 20:12:25 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.021 20:12:25 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:11.021 20:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.021 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.021 20:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.021 20:12:25 -- event/cpu_locks.sh@71 -- # locks_exist 57521 00:05:11.021 20:12:25 -- event/cpu_locks.sh@22 -- # lslocks -p 57521 00:05:11.021 20:12:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.021 20:12:25 -- event/cpu_locks.sh@73 -- # killprocess 57521 00:05:11.021 20:12:25 -- common/autotest_common.sh@926 -- # '[' -z 57521 ']' 00:05:11.021 20:12:25 -- common/autotest_common.sh@930 -- # kill -0 57521 00:05:11.021 20:12:25 -- common/autotest_common.sh@931 -- # uname 00:05:11.021 20:12:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:11.021 20:12:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57521 00:05:11.021 20:12:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:11.021 killing process with pid 57521 00:05:11.021 20:12:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:11.021 20:12:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57521' 00:05:11.021 20:12:25 -- common/autotest_common.sh@945 -- # kill 57521 00:05:11.021 20:12:25 -- common/autotest_common.sh@950 -- # wait 57521 00:05:12.395 00:05:12.395 real 0m3.044s 00:05:12.395 user 0m3.107s 00:05:12.395 sys 0m0.519s 00:05:12.395 20:12:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.395 ************************************ 00:05:12.395 END TEST default_locks_via_rpc 00:05:12.395 ************************************ 00:05:12.395 20:12:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.395 20:12:27 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:12.395 20:12:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:12.395 20:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.395 20:12:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.395 
************************************ 00:05:12.395 START TEST non_locking_app_on_locked_coremask 00:05:12.395 ************************************ 00:05:12.395 20:12:27 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:12.395 20:12:27 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57586 00:05:12.395 20:12:27 -- event/cpu_locks.sh@81 -- # waitforlisten 57586 /var/tmp/spdk.sock 00:05:12.395 20:12:27 -- common/autotest_common.sh@819 -- # '[' -z 57586 ']' 00:05:12.395 20:12:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.395 20:12:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.395 20:12:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.395 20:12:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.395 20:12:27 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.395 20:12:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.395 [2024-10-16 20:12:27.221329] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:12.395 [2024-10-16 20:12:27.221438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57586 ] 00:05:12.653 [2024-10-16 20:12:27.364996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.653 [2024-10-16 20:12:27.546845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.653 [2024-10-16 20:12:27.547036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.024 20:12:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.024 20:12:28 -- common/autotest_common.sh@852 -- # return 0 00:05:14.024 20:12:28 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:14.024 20:12:28 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57610 00:05:14.024 20:12:28 -- event/cpu_locks.sh@85 -- # waitforlisten 57610 /var/tmp/spdk2.sock 00:05:14.024 20:12:28 -- common/autotest_common.sh@819 -- # '[' -z 57610 ']' 00:05:14.024 20:12:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.024 20:12:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:14.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.024 20:12:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.024 20:12:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:14.024 20:12:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.024 [2024-10-16 20:12:28.777399] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:14.024 [2024-10-16 20:12:28.777511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57610 ] 00:05:14.024 [2024-10-16 20:12:28.926518] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
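[annotation] non_locking_app_on_locked_coremask, starting above, launches two targets on the same core: the first claims the core-0 lock, the second passes --disable-cpumask-locks so it never tries to, and gets its own RPC socket via -r so both stay addressable. A sketch of the launch sequence (backgrounding and pid capture are implied by the trace rather than shown in it):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance claims the lock on core 0 (mask 0x1).
    "$SPDK_TGT" -m 0x1 &
    spdk_tgt_pid=$!

    # Second instance shares core 0 but opts out of lock claiming, and
    # listens on a separate RPC socket so the two don't collide.
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!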
00:05:14.024 [2024-10-16 20:12:28.926565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.590 [2024-10-16 20:12:29.277086] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:14.591 [2024-10-16 20:12:29.277266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.524 20:12:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:15.524 20:12:30 -- common/autotest_common.sh@852 -- # return 0 00:05:15.524 20:12:30 -- event/cpu_locks.sh@87 -- # locks_exist 57586 00:05:15.524 20:12:30 -- event/cpu_locks.sh@22 -- # lslocks -p 57586 00:05:15.524 20:12:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.782 20:12:30 -- event/cpu_locks.sh@89 -- # killprocess 57586 00:05:15.782 20:12:30 -- common/autotest_common.sh@926 -- # '[' -z 57586 ']' 00:05:15.782 20:12:30 -- common/autotest_common.sh@930 -- # kill -0 57586 00:05:15.782 20:12:30 -- common/autotest_common.sh@931 -- # uname 00:05:15.782 20:12:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:15.782 20:12:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57586 00:05:15.782 20:12:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:15.782 20:12:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:15.782 killing process with pid 57586 00:05:15.782 20:12:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57586' 00:05:15.782 20:12:30 -- common/autotest_common.sh@945 -- # kill 57586 00:05:15.782 20:12:30 -- common/autotest_common.sh@950 -- # wait 57586 00:05:18.310 20:12:33 -- event/cpu_locks.sh@90 -- # killprocess 57610 00:05:18.310 20:12:33 -- common/autotest_common.sh@926 -- # '[' -z 57610 ']' 00:05:18.310 20:12:33 -- common/autotest_common.sh@930 -- # kill -0 57610 00:05:18.310 20:12:33 -- common/autotest_common.sh@931 -- # uname 00:05:18.310 20:12:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:18.310 20:12:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57610 00:05:18.310 20:12:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:18.310 20:12:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:18.310 killing process with pid 57610 00:05:18.310 20:12:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57610' 00:05:18.310 20:12:33 -- common/autotest_common.sh@945 -- # kill 57610 00:05:18.310 20:12:33 -- common/autotest_common.sh@950 -- # wait 57610 00:05:19.687 00:05:19.687 real 0m7.339s 00:05:19.687 user 0m7.764s 00:05:19.687 sys 0m0.958s 00:05:19.687 20:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.687 ************************************ 00:05:19.687 END TEST non_locking_app_on_locked_coremask 00:05:19.687 20:12:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.687 ************************************ 00:05:19.687 20:12:34 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:19.687 20:12:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.687 20:12:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.687 20:12:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.687 ************************************ 00:05:19.687 START TEST locking_app_on_unlocked_coremask 00:05:19.687 ************************************ 00:05:19.687 20:12:34 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:19.687 20:12:34 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57708 00:05:19.687 20:12:34 -- event/cpu_locks.sh@99 -- # waitforlisten 57708 /var/tmp/spdk.sock 00:05:19.687 20:12:34 -- common/autotest_common.sh@819 -- # '[' -z 57708 ']' 00:05:19.688 20:12:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.688 20:12:34 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:19.688 20:12:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.688 20:12:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.688 20:12:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.688 20:12:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.688 [2024-10-16 20:12:34.601390] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:19.688 [2024-10-16 20:12:34.601508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57708 ] 00:05:19.947 [2024-10-16 20:12:34.747268] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:19.947 [2024-10-16 20:12:34.747312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.205 [2024-10-16 20:12:34.914890] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.205 [2024-10-16 20:12:34.915087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.580 20:12:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.580 20:12:36 -- common/autotest_common.sh@852 -- # return 0 00:05:21.580 20:12:36 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57732 00:05:21.580 20:12:36 -- event/cpu_locks.sh@103 -- # waitforlisten 57732 /var/tmp/spdk2.sock 00:05:21.580 20:12:36 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.580 20:12:36 -- common/autotest_common.sh@819 -- # '[' -z 57732 ']' 00:05:21.580 20:12:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.580 20:12:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.580 20:12:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.580 20:12:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.580 20:12:36 -- common/autotest_common.sh@10 -- # set +x 00:05:21.580 [2024-10-16 20:12:36.145282] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
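[annotation] Every launch above is followed by the same "Waiting for process to start up and listen on UNIX domain socket..." loop. The real waitforlisten in autotest_common.sh probes the RPC server up to max_retries times; the sketch below is a simplified stand-in that only checks for the socket file, whereas the real helper confirms the server actually answers:

    # Simplified waitforlisten: succeed once the target's RPC socket shows
    # up, fail if the process dies first or retries run out.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            [[ -S "$rpc_addr" ]] && return 0          # socket exists (real helper probes RPC)
            sleep 0.1
        done
        return 1
    }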
00:05:21.580 [2024-10-16 20:12:36.145397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57732 ] 00:05:21.580 [2024-10-16 20:12:36.290249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.837 [2024-10-16 20:12:36.655215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.837 [2024-10-16 20:12:36.655402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.798 20:12:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:22.798 20:12:37 -- common/autotest_common.sh@852 -- # return 0 00:05:22.798 20:12:37 -- event/cpu_locks.sh@105 -- # locks_exist 57732 00:05:22.798 20:12:37 -- event/cpu_locks.sh@22 -- # lslocks -p 57732 00:05:22.798 20:12:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.057 20:12:37 -- event/cpu_locks.sh@107 -- # killprocess 57708 00:05:23.057 20:12:37 -- common/autotest_common.sh@926 -- # '[' -z 57708 ']' 00:05:23.057 20:12:37 -- common/autotest_common.sh@930 -- # kill -0 57708 00:05:23.057 20:12:37 -- common/autotest_common.sh@931 -- # uname 00:05:23.057 20:12:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:23.057 20:12:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57708 00:05:23.057 20:12:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:23.057 killing process with pid 57708 00:05:23.057 20:12:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:23.057 20:12:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57708' 00:05:23.057 20:12:37 -- common/autotest_common.sh@945 -- # kill 57708 00:05:23.057 20:12:37 -- common/autotest_common.sh@950 -- # wait 57708 00:05:26.338 20:12:40 -- event/cpu_locks.sh@108 -- # killprocess 57732 00:05:26.338 20:12:40 -- common/autotest_common.sh@926 -- # '[' -z 57732 ']' 00:05:26.338 20:12:40 -- common/autotest_common.sh@930 -- # kill -0 57732 00:05:26.338 20:12:40 -- common/autotest_common.sh@931 -- # uname 00:05:26.338 20:12:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:26.338 20:12:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57732 00:05:26.338 20:12:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:26.338 killing process with pid 57732 00:05:26.338 20:12:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:26.338 20:12:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57732' 00:05:26.338 20:12:40 -- common/autotest_common.sh@945 -- # kill 57732 00:05:26.338 20:12:40 -- common/autotest_common.sh@950 -- # wait 57732 00:05:26.905 00:05:26.905 real 0m7.289s 00:05:26.905 user 0m7.718s 00:05:26.905 sys 0m0.912s 00:05:26.905 20:12:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.905 ************************************ 00:05:26.905 END TEST locking_app_on_unlocked_coremask 00:05:26.905 ************************************ 00:05:26.905 20:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.163 20:12:41 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:27.163 20:12:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.163 20:12:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.163 20:12:41 -- common/autotest_common.sh@10 -- # set +x 
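[annotation] locking_app_on_unlocked_coremask, which just finished above, inverts the arrangement: the first target starts with --disable-cpumask-locks, leaving core 0 unclaimed, so a second, lock-claiming target can come up on the very same core. A sketch of that inverted launch:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First target opts out, so no lock file is created for core 0...
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks &
    spdk_tgt_pid=$!

    # ...which lets a normal, lock-claiming target take core 0 afterwards.
    "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!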
00:05:27.163 ************************************ 00:05:27.163 START TEST locking_app_on_locked_coremask 00:05:27.163 ************************************ 00:05:27.163 20:12:41 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:27.163 20:12:41 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57836 00:05:27.163 20:12:41 -- event/cpu_locks.sh@116 -- # waitforlisten 57836 /var/tmp/spdk.sock 00:05:27.163 20:12:41 -- common/autotest_common.sh@819 -- # '[' -z 57836 ']' 00:05:27.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.163 20:12:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.163 20:12:41 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.163 20:12:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:27.163 20:12:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.163 20:12:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:27.163 20:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.163 [2024-10-16 20:12:41.950725] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:27.163 [2024-10-16 20:12:41.950846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57836 ] 00:05:27.422 [2024-10-16 20:12:42.100268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.422 [2024-10-16 20:12:42.288383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.422 [2024-10-16 20:12:42.288571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.833 20:12:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.833 20:12:43 -- common/autotest_common.sh@852 -- # return 0 00:05:28.833 20:12:43 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57854 00:05:28.833 20:12:43 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57854 /var/tmp/spdk2.sock 00:05:28.833 20:12:43 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:28.833 20:12:43 -- common/autotest_common.sh@640 -- # local es=0 00:05:28.833 20:12:43 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57854 /var/tmp/spdk2.sock 00:05:28.833 20:12:43 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:28.833 20:12:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:28.833 20:12:43 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:28.833 20:12:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:28.833 20:12:43 -- common/autotest_common.sh@643 -- # waitforlisten 57854 /var/tmp/spdk2.sock 00:05:28.833 20:12:43 -- common/autotest_common.sh@819 -- # '[' -z 57854 ']' 00:05:28.833 20:12:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.833 20:12:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:28.833 20:12:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
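[annotation] The NOT wrapper traced above is the suite's negative-assertion primitive: NOT waitforlisten ... succeeds only if waitforlisten fails, which is exactly what must happen when a second target cannot claim an already-locked core. A sketch of the inversion (the real helper also validates its argument and special-cases signal deaths, es > 128):

    # Succeed only when the wrapped command fails - used for steps that are
    # *expected* to go wrong, e.g. starting a second lock-holder on core 0.
    NOT() {
        local es=0
        "$@" || es=$?
        ((es != 0))
    }

    # spdk_tgt_pid2 as captured in the launch sketch above.
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock \
        && echo "second target was rejected, as expected"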
00:05:28.833 20:12:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:28.833 20:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:28.833 [2024-10-16 20:12:43.524504] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:28.833 [2024-10-16 20:12:43.524760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57854 ] 00:05:28.833 [2024-10-16 20:12:43.670134] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57836 has claimed it. 00:05:28.833 [2024-10-16 20:12:43.670182] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.398 ERROR: process (pid: 57854) is no longer running 00:05:29.398 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57854) - No such process 00:05:29.398 20:12:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:29.398 20:12:44 -- common/autotest_common.sh@852 -- # return 1 00:05:29.398 20:12:44 -- common/autotest_common.sh@643 -- # es=1 00:05:29.398 20:12:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:29.398 20:12:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:29.398 20:12:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:29.398 20:12:44 -- event/cpu_locks.sh@122 -- # locks_exist 57836 00:05:29.398 20:12:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.398 20:12:44 -- event/cpu_locks.sh@22 -- # lslocks -p 57836 00:05:29.656 20:12:44 -- event/cpu_locks.sh@124 -- # killprocess 57836 00:05:29.656 20:12:44 -- common/autotest_common.sh@926 -- # '[' -z 57836 ']' 00:05:29.656 20:12:44 -- common/autotest_common.sh@930 -- # kill -0 57836 00:05:29.656 20:12:44 -- common/autotest_common.sh@931 -- # uname 00:05:29.656 20:12:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:29.656 20:12:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57836 00:05:29.656 killing process with pid 57836 00:05:29.656 20:12:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:29.656 20:12:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:29.656 20:12:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57836' 00:05:29.656 20:12:44 -- common/autotest_common.sh@945 -- # kill 57836 00:05:29.656 20:12:44 -- common/autotest_common.sh@950 -- # wait 57836 00:05:31.031 ************************************ 00:05:31.031 END TEST locking_app_on_locked_coremask 00:05:31.031 ************************************ 00:05:31.031 00:05:31.031 real 0m3.795s 00:05:31.031 user 0m4.080s 00:05:31.031 sys 0m0.645s 00:05:31.031 20:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.031 20:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.031 20:12:45 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:31.031 20:12:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.031 20:12:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.031 20:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.031 ************************************ 00:05:31.031 START TEST locking_overlapped_coremask 00:05:31.031 ************************************ 00:05:31.031 20:12:45 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:31.031 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.031 20:12:45 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57911 00:05:31.031 20:12:45 -- event/cpu_locks.sh@133 -- # waitforlisten 57911 /var/tmp/spdk.sock 00:05:31.031 20:12:45 -- common/autotest_common.sh@819 -- # '[' -z 57911 ']' 00:05:31.031 20:12:45 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:31.031 20:12:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.031 20:12:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.031 20:12:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.031 20:12:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.031 20:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.031 [2024-10-16 20:12:45.787752] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:31.031 [2024-10-16 20:12:45.787988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57911 ] 00:05:31.031 [2024-10-16 20:12:45.932198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.289 [2024-10-16 20:12:46.113869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.289 [2024-10-16 20:12:46.114335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.289 [2024-10-16 20:12:46.114597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.289 [2024-10-16 20:12:46.114685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.662 20:12:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.662 20:12:47 -- common/autotest_common.sh@852 -- # return 0 00:05:32.662 20:12:47 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57938 00:05:32.662 20:12:47 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57938 /var/tmp/spdk2.sock 00:05:32.662 20:12:47 -- common/autotest_common.sh@640 -- # local es=0 00:05:32.662 20:12:47 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:32.662 20:12:47 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57938 /var/tmp/spdk2.sock 00:05:32.662 20:12:47 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:32.662 20:12:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.662 20:12:47 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:32.662 20:12:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.662 20:12:47 -- common/autotest_common.sh@643 -- # waitforlisten 57938 /var/tmp/spdk2.sock 00:05:32.662 20:12:47 -- common/autotest_common.sh@819 -- # '[' -z 57938 ']' 00:05:32.662 20:12:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.662 20:12:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.662 20:12:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
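[annotation] The overlapped-coremask tests hinge on the two masks sharing exactly one core: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so core 2 is contested, which is why the failure below names core 2. The arithmetic, spelled out:

    # Bit i set in the mask means core i is claimed by that target.
    mask1=0x7    # binary 00111 -> cores 0,1,2
    mask2=0x1c   # binary 11100 -> cores 2,3,4
    printf 'contested mask: 0x%x\n' $((mask1 & mask2))   # 0x4 -> core 2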
00:05:32.662 20:12:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.662 20:12:47 -- common/autotest_common.sh@10 -- # set +x 00:05:32.662 [2024-10-16 20:12:47.354744] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:32.662 [2024-10-16 20:12:47.355359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57938 ] 00:05:32.662 [2024-10-16 20:12:47.511206] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57911 has claimed it. 00:05:32.662 [2024-10-16 20:12:47.511268] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.227 ERROR: process (pid: 57938) is no longer running 00:05:33.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57938) - No such process 00:05:33.227 20:12:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.227 20:12:47 -- common/autotest_common.sh@852 -- # return 1 00:05:33.227 20:12:47 -- common/autotest_common.sh@643 -- # es=1 00:05:33.227 20:12:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:33.227 20:12:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:33.227 20:12:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:33.227 20:12:47 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:33.227 20:12:47 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.227 20:12:47 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.227 20:12:47 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.227 20:12:47 -- event/cpu_locks.sh@141 -- # killprocess 57911 00:05:33.227 20:12:47 -- common/autotest_common.sh@926 -- # '[' -z 57911 ']' 00:05:33.227 20:12:47 -- common/autotest_common.sh@930 -- # kill -0 57911 00:05:33.228 20:12:47 -- common/autotest_common.sh@931 -- # uname 00:05:33.228 20:12:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:33.228 20:12:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57911 00:05:33.228 killing process with pid 57911 00:05:33.228 20:12:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:33.228 20:12:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:33.228 20:12:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57911' 00:05:33.228 20:12:47 -- common/autotest_common.sh@945 -- # kill 57911 00:05:33.228 20:12:47 -- common/autotest_common.sh@950 -- # wait 57911 00:05:34.629 00:05:34.629 real 0m3.551s 00:05:34.629 user 0m9.589s 00:05:34.629 sys 0m0.487s 00:05:34.629 20:12:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.629 ************************************ 00:05:34.629 END TEST locking_overlapped_coremask 00:05:34.629 ************************************ 00:05:34.629 20:12:49 -- common/autotest_common.sh@10 -- # set +x 00:05:34.629 20:12:49 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:34.629 20:12:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.629 20:12:49 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.629 20:12:49 -- common/autotest_common.sh@10 -- # set +x 00:05:34.629 ************************************ 00:05:34.629 START TEST locking_overlapped_coremask_via_rpc 00:05:34.629 ************************************ 00:05:34.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.629 20:12:49 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:34.629 20:12:49 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=57991 00:05:34.629 20:12:49 -- event/cpu_locks.sh@149 -- # waitforlisten 57991 /var/tmp/spdk.sock 00:05:34.629 20:12:49 -- common/autotest_common.sh@819 -- # '[' -z 57991 ']' 00:05:34.629 20:12:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.629 20:12:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.629 20:12:49 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:34.629 20:12:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.629 20:12:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.629 20:12:49 -- common/autotest_common.sh@10 -- # set +x 00:05:34.629 [2024-10-16 20:12:49.378039] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:34.629 [2024-10-16 20:12:49.378158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57991 ] 00:05:34.629 [2024-10-16 20:12:49.524916] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:34.629 [2024-10-16 20:12:49.524958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.887 [2024-10-16 20:12:49.692891] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.887 [2024-10-16 20:12:49.693196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.887 [2024-10-16 20:12:49.693604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.887 [2024-10-16 20:12:49.693626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.261 20:12:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.261 20:12:50 -- common/autotest_common.sh@852 -- # return 0 00:05:36.261 20:12:50 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58011 00:05:36.261 20:12:50 -- event/cpu_locks.sh@153 -- # waitforlisten 58011 /var/tmp/spdk2.sock 00:05:36.261 20:12:50 -- common/autotest_common.sh@819 -- # '[' -z 58011 ']' 00:05:36.261 20:12:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.261 20:12:50 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:36.261 20:12:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.261 20:12:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
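[annotation] Both overlap variants finish by checking exactly which lock files survive: check_remaining_locks, seen in the trace above and again after the RPC round below, compares the lock-file glob against a brace expansion of the cores the surviving target (mask 0x7) should hold. A sketch mirroring that comparison:

    # The surviving target ran with -m 0x7, so exactly cores 000-002 must
    # still have lock files - no more, no fewer.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }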
00:05:36.261 20:12:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.261 20:12:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.261 [2024-10-16 20:12:50.938639] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:36.261 [2024-10-16 20:12:50.938791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58011 ] 00:05:36.261 [2024-10-16 20:12:51.086955] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:36.261 [2024-10-16 20:12:51.087007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.518 [2024-10-16 20:12:51.383360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.518 [2024-10-16 20:12:51.383638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.518 [2024-10-16 20:12:51.387134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.519 [2024-10-16 20:12:51.387165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:37.899 20:12:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.899 20:12:52 -- common/autotest_common.sh@852 -- # return 0 00:05:37.899 20:12:52 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.899 20:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.899 20:12:52 -- common/autotest_common.sh@10 -- # set +x 00:05:37.899 20:12:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.899 20:12:52 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.899 20:12:52 -- common/autotest_common.sh@640 -- # local es=0 00:05:37.899 20:12:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.899 20:12:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:37.899 20:12:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:37.899 20:12:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:37.899 20:12:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:37.899 20:12:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.899 20:12:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.899 20:12:52 -- common/autotest_common.sh@10 -- # set +x 00:05:37.899 [2024-10-16 20:12:52.455207] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57991 has claimed it. 00:05:37.899 request: 00:05:37.899 { 00:05:37.899 "method": "framework_enable_cpumask_locks", 00:05:37.899 "req_id": 1 00:05:37.899 } 00:05:37.899 Got JSON-RPC error response 00:05:37.899 response: 00:05:37.899 { 00:05:37.899 "code": -32603, 00:05:37.899 "message": "Failed to claim CPU core: 2" 00:05:37.899 } 00:05:37.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
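[annotation] The JSON-RPC exchange above is the runtime analogue of the startup failure: with both targets running lock-free, enabling the locks on the first succeeds, and the same request against the second collides on core 2 and returns -32603. A sketch of driving that collision with rpc.py (the error text comes from the response shown above; how rpc.py surfaces it on stderr is an assumption):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # First target (default socket /var/tmp/spdk.sock) claims cores 0-2.
    "$RPC" framework_enable_cpumask_locks

    # The overlapping second target must be refused on core 2.
    if ! err=$("$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 2>&1); then
        echo "enable failed as expected: $err"   # expect 'Failed to claim CPU core: 2'
    fi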
00:05:37.899 20:12:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:37.899 20:12:52 -- common/autotest_common.sh@643 -- # es=1 00:05:37.899 20:12:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:37.899 20:12:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:37.899 20:12:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:37.899 20:12:52 -- event/cpu_locks.sh@158 -- # waitforlisten 57991 /var/tmp/spdk.sock 00:05:37.899 20:12:52 -- common/autotest_common.sh@819 -- # '[' -z 57991 ']' 00:05:37.899 20:12:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.899 20:12:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.899 20:12:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.899 20:12:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.899 20:12:52 -- common/autotest_common.sh@10 -- # set +x 00:05:37.899 20:12:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.899 20:12:52 -- common/autotest_common.sh@852 -- # return 0 00:05:37.899 20:12:52 -- event/cpu_locks.sh@159 -- # waitforlisten 58011 /var/tmp/spdk2.sock 00:05:37.899 20:12:52 -- common/autotest_common.sh@819 -- # '[' -z 58011 ']' 00:05:37.899 20:12:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.899 20:12:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.899 20:12:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.899 20:12:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.899 20:12:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.158 20:12:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.158 20:12:52 -- common/autotest_common.sh@852 -- # return 0 00:05:38.158 20:12:52 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:38.158 20:12:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.158 20:12:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.158 20:12:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.158 00:05:38.158 real 0m3.548s 00:05:38.158 user 0m1.314s 00:05:38.158 sys 0m0.163s 00:05:38.158 20:12:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.158 20:12:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.158 ************************************ 00:05:38.158 END TEST locking_overlapped_coremask_via_rpc 00:05:38.158 ************************************ 00:05:38.158 20:12:52 -- event/cpu_locks.sh@174 -- # cleanup 00:05:38.158 20:12:52 -- event/cpu_locks.sh@15 -- # [[ -z 57991 ]] 00:05:38.158 20:12:52 -- event/cpu_locks.sh@15 -- # killprocess 57991 00:05:38.158 20:12:52 -- common/autotest_common.sh@926 -- # '[' -z 57991 ']' 00:05:38.158 20:12:52 -- common/autotest_common.sh@930 -- # kill -0 57991 00:05:38.158 20:12:52 -- common/autotest_common.sh@931 -- # uname 00:05:38.158 20:12:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:38.158 20:12:52 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 57991 00:05:38.158 20:12:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.158 20:12:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.158 20:12:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57991' 00:05:38.158 killing process with pid 57991 00:05:38.158 20:12:52 -- common/autotest_common.sh@945 -- # kill 57991 00:05:38.158 20:12:52 -- common/autotest_common.sh@950 -- # wait 57991 00:05:40.069 20:12:54 -- event/cpu_locks.sh@16 -- # [[ -z 58011 ]] 00:05:40.069 20:12:54 -- event/cpu_locks.sh@16 -- # killprocess 58011 00:05:40.069 20:12:54 -- common/autotest_common.sh@926 -- # '[' -z 58011 ']' 00:05:40.069 20:12:54 -- common/autotest_common.sh@930 -- # kill -0 58011 00:05:40.069 20:12:54 -- common/autotest_common.sh@931 -- # uname 00:05:40.069 20:12:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:40.069 20:12:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58011 00:05:40.069 killing process with pid 58011 00:05:40.069 20:12:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:40.069 20:12:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:40.069 20:12:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58011' 00:05:40.069 20:12:54 -- common/autotest_common.sh@945 -- # kill 58011 00:05:40.069 20:12:54 -- common/autotest_common.sh@950 -- # wait 58011 00:05:41.003 20:12:55 -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.003 20:12:55 -- event/cpu_locks.sh@1 -- # cleanup 00:05:41.003 20:12:55 -- event/cpu_locks.sh@15 -- # [[ -z 57991 ]] 00:05:41.003 20:12:55 -- event/cpu_locks.sh@15 -- # killprocess 57991 00:05:41.003 20:12:55 -- common/autotest_common.sh@926 -- # '[' -z 57991 ']' 00:05:41.003 20:12:55 -- common/autotest_common.sh@930 -- # kill -0 57991 00:05:41.003 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (57991) - No such process 00:05:41.003 Process with pid 57991 is not found 00:05:41.003 20:12:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 57991 is not found' 00:05:41.003 20:12:55 -- event/cpu_locks.sh@16 -- # [[ -z 58011 ]] 00:05:41.003 20:12:55 -- event/cpu_locks.sh@16 -- # killprocess 58011 00:05:41.003 20:12:55 -- common/autotest_common.sh@926 -- # '[' -z 58011 ']' 00:05:41.003 20:12:55 -- common/autotest_common.sh@930 -- # kill -0 58011 00:05:41.003 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58011) - No such process 00:05:41.003 Process with pid 58011 is not found 00:05:41.003 20:12:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58011 is not found' 00:05:41.003 20:12:55 -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.003 ************************************ 00:05:41.003 END TEST cpu_locks 00:05:41.003 ************************************ 00:05:41.003 00:05:41.003 real 0m34.862s 00:05:41.003 user 1m0.126s 00:05:41.003 sys 0m4.934s 00:05:41.003 20:12:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.003 20:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.003 ************************************ 00:05:41.003 END TEST event 00:05:41.003 ************************************ 00:05:41.003 00:05:41.003 real 1m0.046s 00:05:41.003 user 1m49.295s 00:05:41.003 sys 0m7.731s 00:05:41.003 20:12:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.003 20:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.003 20:12:55 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:41.003 20:12:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.003 20:12:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.003 20:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.003 ************************************ 00:05:41.003 START TEST thread 00:05:41.003 ************************************ 00:05:41.003 20:12:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:41.003 * Looking for test storage... 00:05:41.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:41.003 20:12:55 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.003 20:12:55 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:41.003 20:12:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.003 20:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.003 ************************************ 00:05:41.003 START TEST thread_poller_perf 00:05:41.003 ************************************ 00:05:41.003 20:12:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.003 [2024-10-16 20:12:55.887284] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:41.004 [2024-10-16 20:12:55.887443] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58164 ] 00:05:41.262 [2024-10-16 20:12:56.030375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.520 [2024-10-16 20:12:56.241738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.520 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:42.904 [2024-10-16T20:12:57.833Z] ====================================== 00:05:42.904 [2024-10-16T20:12:57.833Z] busy:2615099946 (cyc) 00:05:42.904 [2024-10-16T20:12:57.833Z] total_run_count: 290000 00:05:42.904 [2024-10-16T20:12:57.833Z] tsc_hz: 2600000000 (cyc) 00:05:42.904 [2024-10-16T20:12:57.833Z] ====================================== 00:05:42.904 [2024-10-16T20:12:57.833Z] poller_cost: 9017 (cyc), 3468 (nsec) 00:05:42.904 00:05:42.904 real 0m1.664s 00:05:42.904 user 0m1.469s 00:05:42.904 sys 0m0.086s 00:05:42.904 ************************************ 00:05:42.904 END TEST thread_poller_perf 00:05:42.904 ************************************ 00:05:42.904 20:12:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.904 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.905 20:12:57 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.905 20:12:57 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:42.905 20:12:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.905 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.905 ************************************ 00:05:42.905 START TEST thread_poller_perf 00:05:42.905 ************************************ 00:05:42.905 20:12:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.905 [2024-10-16 20:12:57.592590] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:42.905 [2024-10-16 20:12:57.592693] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58206 ] 00:05:42.905 [2024-10-16 20:12:57.742246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.166 [2024-10-16 20:12:57.967972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.166 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:44.565 [2024-10-16T20:12:59.494Z] ====================================== 00:05:44.565 [2024-10-16T20:12:59.494Z] busy:2604682532 (cyc) 00:05:44.565 [2024-10-16T20:12:59.494Z] total_run_count: 3968000 00:05:44.565 [2024-10-16T20:12:59.494Z] tsc_hz: 2600000000 (cyc) 00:05:44.565 [2024-10-16T20:12:59.494Z] ====================================== 00:05:44.565 [2024-10-16T20:12:59.494Z] poller_cost: 656 (cyc), 252 (nsec) 00:05:44.565 00:05:44.565 real 0m1.689s 00:05:44.565 user 0m1.499s 00:05:44.565 sys 0m0.080s 00:05:44.565 20:12:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.565 20:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.565 ************************************ 00:05:44.565 END TEST thread_poller_perf 00:05:44.565 ************************************ 00:05:44.565 20:12:59 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:44.565 00:05:44.565 real 0m3.494s 00:05:44.565 user 0m3.020s 00:05:44.565 sys 0m0.255s 00:05:44.565 20:12:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.565 20:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.565 ************************************ 00:05:44.565 END TEST thread 00:05:44.565 ************************************ 00:05:44.565 20:12:59 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:44.565 20:12:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.565 20:12:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.565 20:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.565 ************************************ 00:05:44.565 START TEST accel 00:05:44.565 ************************************ 00:05:44.565 20:12:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:44.565 * Looking for test storage... 00:05:44.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:44.565 20:12:59 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:44.565 20:12:59 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:44.565 20:12:59 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.565 20:12:59 -- accel/accel.sh@59 -- # spdk_tgt_pid=58286 00:05:44.565 20:12:59 -- accel/accel.sh@60 -- # waitforlisten 58286 00:05:44.565 20:12:59 -- common/autotest_common.sh@819 -- # '[' -z 58286 ']' 00:05:44.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.565 20:12:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.565 20:12:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.565 20:12:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.565 20:12:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.565 20:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.565 20:12:59 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:44.565 20:12:59 -- accel/accel.sh@58 -- # build_accel_config 00:05:44.565 20:12:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.565 20:12:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.565 20:12:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.565 20:12:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.565 20:12:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.565 20:12:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.565 20:12:59 -- accel/accel.sh@42 -- # jq -r . 
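Both poller_cost figures in the thread suite above are derived from the counters printed beside them: cycles per poller invocation is busy cycles divided by total_run_count, and the nanosecond value converts that through tsc_hz. A quick shell check that reproduces the printed numbers (integer division; the tool's exact rounding is an assumption):

    # run 1: 1000 pollers, 1 us period
    busy=2615099946 runs=290000 tsc_hz=2600000000
    echo "$((busy / runs)) cyc, $((busy / runs * 1000000000 / tsc_hz)) nsec"  # 9017 cyc, 3468 nsec
    # run 2: 1000 pollers, 0 us period; busy polling runs far more often at far lower cost per call
    busy=2604682532 runs=3968000
    echo "$((busy / runs)) cyc, $((busy / runs * 1000000000 / tsc_hz)) nsec"  # 656 cyc, 252 nsec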
00:05:44.565 [2024-10-16 20:12:59.446704] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:44.565 [2024-10-16 20:12:59.446790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58286 ] 00:05:44.836 [2024-10-16 20:12:59.587828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.096 [2024-10-16 20:12:59.800560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.096 [2024-10-16 20:12:59.800780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.038 20:13:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.038 20:13:00 -- common/autotest_common.sh@852 -- # return 0 00:05:46.038 20:13:00 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:46.038 20:13:00 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:46.038 20:13:00 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:46.038 20:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.038 20:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:46.038 20:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 
00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # IFS== 00:05:46.297 20:13:00 -- accel/accel.sh@64 -- # read -r opc module 00:05:46.297 20:13:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:46.297 20:13:00 -- accel/accel.sh@67 -- # killprocess 58286 00:05:46.297 20:13:00 -- common/autotest_common.sh@926 -- # '[' -z 58286 ']' 00:05:46.297 20:13:00 -- common/autotest_common.sh@930 -- # kill -0 58286 00:05:46.297 20:13:00 -- common/autotest_common.sh@931 -- # uname 00:05:46.297 20:13:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.297 20:13:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58286 00:05:46.297 killing process with pid 58286 00:05:46.297 20:13:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.297 20:13:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.297 20:13:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58286' 00:05:46.297 20:13:01 -- common/autotest_common.sh@945 -- # kill 58286 00:05:46.298 20:13:01 -- common/autotest_common.sh@950 -- # wait 58286 00:05:47.677 20:13:02 -- accel/accel.sh@68 -- # trap - ERR 00:05:47.677 20:13:02 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:47.677 20:13:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:47.677 20:13:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.677 20:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.677 20:13:02 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:47.677 20:13:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:47.677 20:13:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.677 20:13:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.677 20:13:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.677 20:13:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:47.677 20:13:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.677 20:13:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.677 20:13:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.677 20:13:02 -- accel/accel.sh@42 -- # jq -r . 00:05:47.677 20:13:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.677 20:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.937 20:13:02 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:47.937 20:13:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:47.937 20:13:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.937 20:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.937 ************************************ 00:05:47.937 START TEST accel_missing_filename 00:05:47.937 ************************************ 00:05:47.937 20:13:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:47.937 20:13:02 -- common/autotest_common.sh@640 -- # local es=0 00:05:47.937 20:13:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:47.937 20:13:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:47.937 20:13:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:47.937 20:13:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:47.937 20:13:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:47.937 20:13:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:47.937 20:13:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:47.937 20:13:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.937 20:13:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.937 20:13:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.937 20:13:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.937 20:13:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.937 20:13:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.937 20:13:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.937 20:13:02 -- accel/accel.sh@42 -- # jq -r . 00:05:47.937 [2024-10-16 20:13:02.661104] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:47.937 [2024-10-16 20:13:02.661205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58358 ] 00:05:47.937 [2024-10-16 20:13:02.807883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.195 [2024-10-16 20:13:02.990721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.195 [2024-10-16 20:13:03.116065] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.764 [2024-10-16 20:13:03.396066] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:48.764 A filename is required. 
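The "A filename is required." abort is the expected outcome of this negative test: for compress and decompress workloads accel_perf reads its input from -l, and per its usage text -o 0 makes the transfer size follow the input file's size. A sketch of an invocation that passes the check, reusing the binary and corpus paths from the surrounding run_test lines (note -y stays off; the next test shows compress rejects the verify option):

    # compress the repo's test corpus; -l supplies the uncompressed input
    # that the bare '-w compress' above was missing
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -o 0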
00:05:48.764 20:13:03 -- common/autotest_common.sh@643 -- # es=234 00:05:48.764 ************************************ 00:05:48.764 END TEST accel_missing_filename 00:05:48.764 ************************************ 00:05:48.764 20:13:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:48.764 20:13:03 -- common/autotest_common.sh@652 -- # es=106 00:05:48.764 20:13:03 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:48.764 20:13:03 -- common/autotest_common.sh@660 -- # es=1 00:05:48.764 20:13:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:48.764 00:05:48.764 real 0m0.999s 00:05:48.764 user 0m0.780s 00:05:48.764 sys 0m0.141s 00:05:48.764 20:13:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.764 20:13:03 -- common/autotest_common.sh@10 -- # set +x 00:05:48.764 20:13:03 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:48.764 20:13:03 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:48.764 20:13:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.764 20:13:03 -- common/autotest_common.sh@10 -- # set +x 00:05:48.764 ************************************ 00:05:48.764 START TEST accel_compress_verify 00:05:48.764 ************************************ 00:05:48.764 20:13:03 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:48.764 20:13:03 -- common/autotest_common.sh@640 -- # local es=0 00:05:48.764 20:13:03 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:48.764 20:13:03 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:48.764 20:13:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.764 20:13:03 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:48.764 20:13:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.764 20:13:03 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:48.764 20:13:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:48.764 20:13:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.764 20:13:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.764 20:13:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.764 20:13:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.764 20:13:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.764 20:13:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.764 20:13:03 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.764 20:13:03 -- accel/accel.sh@42 -- # jq -r . 00:05:49.023 [2024-10-16 20:13:03.697316] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:49.023 [2024-10-16 20:13:03.697419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58389 ] 00:05:49.023 [2024-10-16 20:13:03.843786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.282 [2024-10-16 20:13:04.003260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.282 [2024-10-16 20:13:04.127251] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.540 [2024-10-16 20:13:04.403795] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:49.820 00:05:49.820 Compression does not support the verify option, aborting. 00:05:49.820 20:13:04 -- common/autotest_common.sh@643 -- # es=161 00:05:49.820 20:13:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:49.820 ************************************ 00:05:49.820 END TEST accel_compress_verify 00:05:49.820 ************************************ 00:05:49.820 20:13:04 -- common/autotest_common.sh@652 -- # es=33 00:05:49.820 20:13:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:49.820 20:13:04 -- common/autotest_common.sh@660 -- # es=1 00:05:49.820 20:13:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:49.820 00:05:49.820 real 0m0.977s 00:05:49.820 user 0m0.763s 00:05:49.820 sys 0m0.139s 00:05:49.820 20:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.820 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.820 20:13:04 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:49.820 20:13:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:49.820 20:13:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.820 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.820 ************************************ 00:05:49.820 START TEST accel_wrong_workload 00:05:49.820 ************************************ 00:05:49.820 20:13:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:49.820 20:13:04 -- common/autotest_common.sh@640 -- # local es=0 00:05:49.820 20:13:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:49.820 20:13:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:49.820 20:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:49.820 20:13:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:49.820 20:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:49.821 20:13:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:49.821 20:13:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:49.821 20:13:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.821 20:13:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.821 20:13:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.821 20:13:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.821 20:13:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.821 20:13:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.821 20:13:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.821 20:13:04 -- accel/accel.sh@42 -- # jq -r . 
00:05:49.821 Unsupported workload type: foobar 00:05:49.821 [2024-10-16 20:13:04.714461] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:49.821 accel_perf options: 00:05:49.821 [-h help message] 00:05:49.821 [-q queue depth per core] 00:05:49.821 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:49.821 [-T number of threads per core 00:05:49.821 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:49.821 [-t time in seconds] 00:05:49.821 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:49.821 [ dif_verify, , dif_generate, dif_generate_copy 00:05:49.821 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:49.821 [-l for compress/decompress workloads, name of uncompressed input file 00:05:49.821 [-S for crc32c workload, use this seed value (default 0) 00:05:49.821 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:49.821 [-f for fill workload, use this BYTE value (default 255) 00:05:49.821 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:49.821 [-y verify result if this switch is on] 00:05:49.821 [-a tasks to allocate per core (default: same value as -q)] 00:05:49.821 Can be used to spread operations across a wider range of memory. 00:05:49.821 20:13:04 -- common/autotest_common.sh@643 -- # es=1 00:05:49.821 20:13:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:49.821 20:13:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:49.821 20:13:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:49.821 00:05:49.821 real 0m0.047s 00:05:49.821 user 0m0.045s 00:05:49.821 sys 0m0.027s 00:05:49.821 20:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.821 ************************************ 00:05:49.821 END TEST accel_wrong_workload 00:05:49.821 ************************************ 00:05:49.821 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.082 20:13:04 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:50.082 20:13:04 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:50.082 20:13:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.082 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.082 ************************************ 00:05:50.082 START TEST accel_negative_buffers 00:05:50.082 ************************************ 00:05:50.082 20:13:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:50.082 20:13:04 -- common/autotest_common.sh@640 -- # local es=0 00:05:50.082 20:13:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:50.082 20:13:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:50.082 20:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:50.082 20:13:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:50.082 20:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:50.082 20:13:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:50.082 20:13:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:50.082 20:13:04 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:50.082 20:13:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.082 20:13:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.082 20:13:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.082 20:13:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.082 20:13:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.082 20:13:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.082 20:13:04 -- accel/accel.sh@42 -- # jq -r . 00:05:50.082 -x option must be non-negative. 00:05:50.082 [2024-10-16 20:13:04.800533] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:50.082 accel_perf options: 00:05:50.082 [-h help message] 00:05:50.082 [-q queue depth per core] 00:05:50.082 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:50.082 [-T number of threads per core 00:05:50.082 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:50.082 [-t time in seconds] 00:05:50.082 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:50.082 [ dif_verify, , dif_generate, dif_generate_copy 00:05:50.082 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:50.082 [-l for compress/decompress workloads, name of uncompressed input file 00:05:50.082 [-S for crc32c workload, use this seed value (default 0) 00:05:50.082 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:50.082 [-f for fill workload, use this BYTE value (default 255) 00:05:50.082 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:50.082 [-y verify result if this switch is on] 00:05:50.082 [-a tasks to allocate per core (default: same value as -q)] 00:05:50.082 Can be used to spread operations across a wider range of memory. 
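The two option-validation tests above exercise the same parser path: a bogus -w value and a negative -x both make spdk_app_parse_args fail before the app starts, so only the usage block is printed. Going by that usage text, xor needs at least two source buffers, so a minimal accepted counterpart to the rejected '-x -1' would be the following (illustrative sketch only, not a command from this run):

    # xor across two source buffers with result verification; -x must be >= 2
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2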
00:05:50.082 ************************************ 00:05:50.082 END TEST accel_negative_buffers 00:05:50.082 ************************************ 00:05:50.082 20:13:04 -- common/autotest_common.sh@643 -- # es=1 00:05:50.082 20:13:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:50.082 20:13:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:50.082 20:13:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:50.082 00:05:50.082 real 0m0.052s 00:05:50.082 user 0m0.047s 00:05:50.082 sys 0m0.033s 00:05:50.082 20:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.082 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.082 20:13:04 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:50.082 20:13:04 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:50.082 20:13:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.082 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.082 ************************************ 00:05:50.082 START TEST accel_crc32c 00:05:50.082 ************************************ 00:05:50.082 20:13:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:50.082 20:13:04 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.082 20:13:04 -- accel/accel.sh@17 -- # local accel_module 00:05:50.082 20:13:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:50.082 20:13:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:50.082 20:13:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.082 20:13:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.082 20:13:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.082 20:13:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.082 20:13:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.082 20:13:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.082 20:13:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.082 20:13:04 -- accel/accel.sh@42 -- # jq -r . 00:05:50.082 [2024-10-16 20:13:04.886840] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:50.082 [2024-10-16 20:13:04.886940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58456 ] 00:05:50.343 [2024-10-16 20:13:05.034525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.343 [2024-10-16 20:13:05.225882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.257 20:13:07 -- accel/accel.sh@18 -- # out=' 00:05:52.257 SPDK Configuration: 00:05:52.257 Core mask: 0x1 00:05:52.257 00:05:52.257 Accel Perf Configuration: 00:05:52.257 Workload Type: crc32c 00:05:52.257 CRC-32C seed: 32 00:05:52.257 Transfer size: 4096 bytes 00:05:52.257 Vector count 1 00:05:52.257 Module: software 00:05:52.257 Queue depth: 32 00:05:52.257 Allocate depth: 32 00:05:52.257 # threads/core: 1 00:05:52.258 Run time: 1 seconds 00:05:52.258 Verify: Yes 00:05:52.258 00:05:52.258 Running for 1 seconds... 
00:05:52.258 00:05:52.258 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:52.258 ------------------------------------------------------------------------------------ 00:05:52.258 0,0 461152/s 1801 MiB/s 0 0 00:05:52.258 ==================================================================================== 00:05:52.258 Total 461152/s 1801 MiB/s 0 0' 00:05:52.258 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.258 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.258 20:13:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:52.258 20:13:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:52.258 20:13:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.258 20:13:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.258 20:13:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.258 20:13:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.258 20:13:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.258 20:13:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.258 20:13:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.258 20:13:07 -- accel/accel.sh@42 -- # jq -r . 00:05:52.258 [2024-10-16 20:13:07.064217] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:52.258 [2024-10-16 20:13:07.064660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58482 ] 00:05:52.517 [2024-10-16 20:13:07.230659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.778 [2024-10-16 20:13:07.471754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val= 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val= 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val=0x1 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val= 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val= 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val=crc32c 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val=32 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val= 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val=software 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.778 20:13:07 -- accel/accel.sh@23 -- # accel_module=software 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.778 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.778 20:13:07 -- accel/accel.sh@21 -- # val=32 00:05:52.778 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.779 20:13:07 -- accel/accel.sh@21 -- # val=32 00:05:52.779 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.779 20:13:07 -- accel/accel.sh@21 -- # val=1 00:05:52.779 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.779 20:13:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:52.779 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.779 20:13:07 -- accel/accel.sh@21 -- # val=Yes 00:05:52.779 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.779 20:13:07 -- accel/accel.sh@21 -- # val= 00:05:52.779 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.779 20:13:07 -- accel/accel.sh@21 -- # val= 00:05:52.779 20:13:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # IFS=: 00:05:52.779 20:13:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.692 20:13:09 -- accel/accel.sh@21 -- # val= 00:05:54.692 20:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.692 20:13:09 -- accel/accel.sh@21 -- # val= 00:05:54.692 20:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.692 20:13:09 -- accel/accel.sh@21 -- # val= 00:05:54.692 20:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.692 20:13:09 -- accel/accel.sh@21 -- # val= 00:05:54.692 20:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.692 20:13:09 -- accel/accel.sh@21 -- # val= 00:05:54.692 20:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.692 20:13:09 -- 
accel/accel.sh@20 -- # read -r var val 00:05:54.692 20:13:09 -- accel/accel.sh@21 -- # val= 00:05:54.692 20:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.692 20:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.692 ************************************ 00:05:54.692 END TEST accel_crc32c 00:05:54.692 ************************************ 00:05:54.692 20:13:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:54.692 20:13:09 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:54.692 20:13:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.692 00:05:54.692 real 0m4.268s 00:05:54.692 user 0m3.730s 00:05:54.692 sys 0m0.323s 00:05:54.692 20:13:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.692 20:13:09 -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 20:13:09 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:54.692 20:13:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:54.692 20:13:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.692 20:13:09 -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 ************************************ 00:05:54.692 START TEST accel_crc32c_C2 00:05:54.692 ************************************ 00:05:54.692 20:13:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:54.692 20:13:09 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.692 20:13:09 -- accel/accel.sh@17 -- # local accel_module 00:05:54.692 20:13:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:54.692 20:13:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:54.692 20:13:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.692 20:13:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.692 20:13:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.692 20:13:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.692 20:13:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.692 20:13:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.692 20:13:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.692 20:13:09 -- accel/accel.sh@42 -- # jq -r . 00:05:54.692 [2024-10-16 20:13:09.192501] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:54.692 [2024-10-16 20:13:09.192602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58523 ] 00:05:54.692 [2024-10-16 20:13:09.337407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.692 [2024-10-16 20:13:09.505296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.609 20:13:11 -- accel/accel.sh@18 -- # out=' 00:05:56.609 SPDK Configuration: 00:05:56.609 Core mask: 0x1 00:05:56.609 00:05:56.609 Accel Perf Configuration: 00:05:56.609 Workload Type: crc32c 00:05:56.609 CRC-32C seed: 0 00:05:56.609 Transfer size: 4096 bytes 00:05:56.609 Vector count 2 00:05:56.609 Module: software 00:05:56.609 Queue depth: 32 00:05:56.609 Allocate depth: 32 00:05:56.609 # threads/core: 1 00:05:56.609 Run time: 1 seconds 00:05:56.609 Verify: Yes 00:05:56.609 00:05:56.609 Running for 1 seconds... 
00:05:56.609 00:05:56.609 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.609 ------------------------------------------------------------------------------------ 00:05:56.609 0,0 390304/s 3049 MiB/s 0 0 00:05:56.609 ==================================================================================== 00:05:56.609 Total 390304/s 1524 MiB/s 0 0' 00:05:56.609 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.609 20:13:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:56.609 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.609 20:13:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:56.609 20:13:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.609 20:13:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.609 20:13:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.609 20:13:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.609 20:13:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.609 20:13:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.609 20:13:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.609 20:13:11 -- accel/accel.sh@42 -- # jq -r . 00:05:56.609 [2024-10-16 20:13:11.281928] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:56.609 [2024-10-16 20:13:11.282032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58549 ] 00:05:56.609 [2024-10-16 20:13:11.429973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.871 [2024-10-16 20:13:11.602343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val= 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val= 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val=0x1 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val= 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val= 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val=crc32c 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val=0 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val= 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val=software 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val=32 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val=32 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val=1 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val=Yes 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val= 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:56.871 20:13:11 -- accel/accel.sh@21 -- # val= 00:05:56.871 20:13:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:56.871 20:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:58.771 20:13:13 -- accel/accel.sh@21 -- # val= 00:05:58.771 20:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.771 20:13:13 -- accel/accel.sh@21 -- # val= 00:05:58.771 20:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.771 20:13:13 -- accel/accel.sh@21 -- # val= 00:05:58.771 20:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.771 20:13:13 -- accel/accel.sh@21 -- # val= 00:05:58.771 20:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.771 20:13:13 -- accel/accel.sh@21 -- # val= 00:05:58.771 20:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.771 20:13:13 -- 
accel/accel.sh@20 -- # read -r var val 00:05:58.771 20:13:13 -- accel/accel.sh@21 -- # val= 00:05:58.771 20:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.771 20:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.771 20:13:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.771 20:13:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:58.771 ************************************ 00:05:58.771 END TEST accel_crc32c_C2 00:05:58.771 ************************************ 00:05:58.771 20:13:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.771 00:05:58.771 real 0m4.172s 00:05:58.771 user 0m3.731s 00:05:58.771 sys 0m0.233s 00:05:58.771 20:13:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.771 20:13:13 -- common/autotest_common.sh@10 -- # set +x 00:05:58.771 20:13:13 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:58.771 20:13:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:58.771 20:13:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.771 20:13:13 -- common/autotest_common.sh@10 -- # set +x 00:05:58.771 ************************************ 00:05:58.771 START TEST accel_copy 00:05:58.771 ************************************ 00:05:58.771 20:13:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:58.771 20:13:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.771 20:13:13 -- accel/accel.sh@17 -- # local accel_module 00:05:58.771 20:13:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:58.771 20:13:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:58.771 20:13:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.771 20:13:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.771 20:13:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.771 20:13:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.771 20:13:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.771 20:13:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.771 20:13:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.771 20:13:13 -- accel/accel.sh@42 -- # jq -r . 00:05:58.771 [2024-10-16 20:13:13.402812] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:58.771 [2024-10-16 20:13:13.402915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58595 ] 00:05:58.771 [2024-10-16 20:13:13.551176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.771 [2024-10-16 20:13:13.697704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.671 20:13:15 -- accel/accel.sh@18 -- # out=' 00:06:00.671 SPDK Configuration: 00:06:00.671 Core mask: 0x1 00:06:00.671 00:06:00.671 Accel Perf Configuration: 00:06:00.671 Workload Type: copy 00:06:00.671 Transfer size: 4096 bytes 00:06:00.671 Vector count 1 00:06:00.671 Module: software 00:06:00.671 Queue depth: 32 00:06:00.671 Allocate depth: 32 00:06:00.671 # threads/core: 1 00:06:00.671 Run time: 1 seconds 00:06:00.671 Verify: Yes 00:06:00.671 00:06:00.671 Running for 1 seconds... 
00:06:00.671 00:06:00.671 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.671 ------------------------------------------------------------------------------------ 00:06:00.671 0,0 372480/s 1455 MiB/s 0 0 00:06:00.671 ==================================================================================== 00:06:00.671 Total 372480/s 1455 MiB/s 0 0' 00:06:00.671 20:13:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:00.671 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.671 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.671 20:13:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:00.671 20:13:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.671 20:13:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.671 20:13:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.671 20:13:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.671 20:13:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.671 20:13:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.671 20:13:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.671 20:13:15 -- accel/accel.sh@42 -- # jq -r . 00:06:00.671 [2024-10-16 20:13:15.309646] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:00.671 [2024-10-16 20:13:15.309752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58616 ] 00:06:00.671 [2024-10-16 20:13:15.454654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.929 [2024-10-16 20:13:15.624598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.929 20:13:15 -- accel/accel.sh@21 -- # val= 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val= 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val=0x1 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val= 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val= 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val=copy 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- 
accel/accel.sh@21 -- # val= 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val=software 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val=32 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val=32 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val=1 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val=Yes 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val= 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:00.930 20:13:15 -- accel/accel.sh@21 -- # val= 00:06:00.930 20:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # IFS=: 00:06:00.930 20:13:15 -- accel/accel.sh@20 -- # read -r var val 00:06:02.858 20:13:17 -- accel/accel.sh@21 -- # val= 00:06:02.858 20:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.858 20:13:17 -- accel/accel.sh@21 -- # val= 00:06:02.858 20:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.858 20:13:17 -- accel/accel.sh@21 -- # val= 00:06:02.858 20:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.858 20:13:17 -- accel/accel.sh@21 -- # val= 00:06:02.858 20:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.858 20:13:17 -- accel/accel.sh@21 -- # val= 00:06:02.858 20:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # read -r var val 00:06:02.858 20:13:17 -- accel/accel.sh@21 -- # val= 00:06:02.858 20:13:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.858 20:13:17 -- accel/accel.sh@20 -- # IFS=: 00:06:02.858 20:13:17 -- 
accel/accel.sh@20 -- # read -r var val 00:06:02.858 20:13:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.858 20:13:17 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:02.858 20:13:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.858 00:06:02.858 real 0m3.957s 00:06:02.858 user 0m3.515s 00:06:02.858 sys 0m0.236s 00:06:02.858 20:13:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.858 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:02.858 ************************************ 00:06:02.858 END TEST accel_copy 00:06:02.858 ************************************ 00:06:02.858 20:13:17 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.858 20:13:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:02.858 20:13:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.858 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:02.858 ************************************ 00:06:02.858 START TEST accel_fill 00:06:02.858 ************************************ 00:06:02.859 20:13:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.859 20:13:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.859 20:13:17 -- accel/accel.sh@17 -- # local accel_module 00:06:02.859 20:13:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.859 20:13:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.859 20:13:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.859 20:13:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.859 20:13:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.859 20:13:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.859 20:13:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.859 20:13:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.859 20:13:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.859 20:13:17 -- accel/accel.sh@42 -- # jq -r . 00:06:02.859 [2024-10-16 20:13:17.401902] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:02.859 [2024-10-16 20:13:17.401982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58657 ] 00:06:02.859 [2024-10-16 20:13:17.542330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.859 [2024-10-16 20:13:17.721727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.760 20:13:19 -- accel/accel.sh@18 -- # out=' 00:06:04.760 SPDK Configuration: 00:06:04.760 Core mask: 0x1 00:06:04.760 00:06:04.760 Accel Perf Configuration: 00:06:04.760 Workload Type: fill 00:06:04.760 Fill pattern: 0x80 00:06:04.760 Transfer size: 4096 bytes 00:06:04.760 Vector count 1 00:06:04.760 Module: software 00:06:04.760 Queue depth: 64 00:06:04.760 Allocate depth: 64 00:06:04.760 # threads/core: 1 00:06:04.760 Run time: 1 seconds 00:06:04.760 Verify: Yes 00:06:04.760 00:06:04.760 Running for 1 seconds... 
00:06:04.760 00:06:04.760 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.760 ------------------------------------------------------------------------------------ 00:06:04.760 0,0 456128/s 1781 MiB/s 0 0 00:06:04.760 ==================================================================================== 00:06:04.760 Total 456128/s 1781 MiB/s 0 0' 00:06:04.760 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:04.760 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:04.760 20:13:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:04.760 20:13:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:04.760 20:13:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.760 20:13:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.760 20:13:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.760 20:13:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.760 20:13:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.760 20:13:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.760 20:13:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.760 20:13:19 -- accel/accel.sh@42 -- # jq -r . 00:06:04.760 [2024-10-16 20:13:19.486376] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:04.760 [2024-10-16 20:13:19.486602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58683 ] 00:06:04.760 [2024-10-16 20:13:19.636803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.018 [2024-10-16 20:13:19.818202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val= 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val= 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val=0x1 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val= 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val= 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val=fill 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val=0x80 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 
00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val= 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val=software 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val=64 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val=64 00:06:05.276 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.276 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.276 20:13:19 -- accel/accel.sh@21 -- # val=1 00:06:05.277 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.277 20:13:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:05.277 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.277 20:13:19 -- accel/accel.sh@21 -- # val=Yes 00:06:05.277 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.277 20:13:19 -- accel/accel.sh@21 -- # val= 00:06:05.277 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.277 20:13:19 -- accel/accel.sh@21 -- # val= 00:06:05.277 20:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.277 20:13:19 -- accel/accel.sh@20 -- # read -r var val 00:06:06.651 20:13:21 -- accel/accel.sh@21 -- # val= 00:06:06.651 20:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.651 20:13:21 -- accel/accel.sh@21 -- # val= 00:06:06.651 20:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.651 20:13:21 -- accel/accel.sh@21 -- # val= 00:06:06.651 20:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.651 20:13:21 -- accel/accel.sh@21 -- # val= 00:06:06.651 20:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.651 20:13:21 -- accel/accel.sh@21 -- # val= 00:06:06.651 20:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # IFS=: 
00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.651 20:13:21 -- accel/accel.sh@21 -- # val= 00:06:06.651 20:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # IFS=: 00:06:06.651 20:13:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.651 20:13:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.651 20:13:21 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:06.651 20:13:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.651 00:06:06.651 real 0m4.127s 00:06:06.651 user 0m3.679s 00:06:06.651 sys 0m0.244s 00:06:06.651 20:13:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.651 ************************************ 00:06:06.651 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:06:06.651 END TEST accel_fill 00:06:06.651 ************************************ 00:06:06.651 20:13:21 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:06.651 20:13:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:06.651 20:13:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.651 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:06:06.651 ************************************ 00:06:06.651 START TEST accel_copy_crc32c 00:06:06.651 ************************************ 00:06:06.651 20:13:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:06.651 20:13:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.651 20:13:21 -- accel/accel.sh@17 -- # local accel_module 00:06:06.651 20:13:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:06.651 20:13:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:06.651 20:13:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.651 20:13:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.651 20:13:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.651 20:13:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.651 20:13:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.651 20:13:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.651 20:13:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.651 20:13:21 -- accel/accel.sh@42 -- # jq -r . 00:06:06.651 [2024-10-16 20:13:21.566732] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:06.652 [2024-10-16 20:13:21.566846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58730 ] 00:06:06.909 [2024-10-16 20:13:21.717065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.167 [2024-10-16 20:13:21.886353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.092 20:13:23 -- accel/accel.sh@18 -- # out=' 00:06:09.092 SPDK Configuration: 00:06:09.092 Core mask: 0x1 00:06:09.092 00:06:09.092 Accel Perf Configuration: 00:06:09.092 Workload Type: copy_crc32c 00:06:09.092 CRC-32C seed: 0 00:06:09.092 Vector size: 4096 bytes 00:06:09.092 Transfer size: 4096 bytes 00:06:09.092 Vector count 1 00:06:09.092 Module: software 00:06:09.092 Queue depth: 32 00:06:09.092 Allocate depth: 32 00:06:09.092 # threads/core: 1 00:06:09.092 Run time: 1 seconds 00:06:09.092 Verify: Yes 00:06:09.092 00:06:09.092 Running for 1 seconds... 
00:06:09.092 00:06:09.092 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.092 ------------------------------------------------------------------------------------ 00:06:09.092 0,0 237632/s 928 MiB/s 0 0 00:06:09.092 ==================================================================================== 00:06:09.092 Total 237632/s 928 MiB/s 0 0' 00:06:09.092 20:13:23 -- accel/accel.sh@20 -- # IFS=: 00:06:09.092 20:13:23 -- accel/accel.sh@20 -- # read -r var val 00:06:09.092 20:13:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:09.092 20:13:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.092 20:13:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:09.092 20:13:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.092 20:13:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.092 20:13:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.092 20:13:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.092 20:13:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.092 20:13:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.092 20:13:23 -- accel/accel.sh@42 -- # jq -r . 00:06:09.092 [2024-10-16 20:13:23.655924] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:09.092 [2024-10-16 20:13:23.656178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58756 ] 00:06:09.092 [2024-10-16 20:13:23.795786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.092 [2024-10-16 20:13:23.966243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val= 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val= 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val=0x1 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val= 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val= 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val=0 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 
20:13:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val= 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val=software 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val=32 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val=32 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val=1 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val=Yes 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val= 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.350 20:13:24 -- accel/accel.sh@21 -- # val= 00:06:09.350 20:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.350 20:13:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.722 20:13:25 -- accel/accel.sh@21 -- # val= 00:06:10.722 20:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.722 20:13:25 -- accel/accel.sh@21 -- # val= 00:06:10.722 20:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.722 20:13:25 -- accel/accel.sh@21 -- # val= 00:06:10.722 20:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.722 20:13:25 -- accel/accel.sh@21 -- # val= 00:06:10.722 20:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # IFS=: 
00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.722 20:13:25 -- accel/accel.sh@21 -- # val= 00:06:10.722 20:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.722 20:13:25 -- accel/accel.sh@21 -- # val= 00:06:10.722 20:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # IFS=: 00:06:10.722 20:13:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.722 20:13:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:10.722 20:13:25 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:10.722 20:13:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.722 00:06:10.722 real 0m4.037s 00:06:10.722 user 0m3.588s 00:06:10.722 sys 0m0.245s 00:06:10.722 20:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.722 20:13:25 -- common/autotest_common.sh@10 -- # set +x 00:06:10.722 ************************************ 00:06:10.722 END TEST accel_copy_crc32c 00:06:10.722 ************************************ 00:06:10.722 20:13:25 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.722 20:13:25 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:10.722 20:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.722 20:13:25 -- common/autotest_common.sh@10 -- # set +x 00:06:10.722 ************************************ 00:06:10.722 START TEST accel_copy_crc32c_C2 00:06:10.722 ************************************ 00:06:10.722 20:13:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.722 20:13:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.722 20:13:25 -- accel/accel.sh@17 -- # local accel_module 00:06:10.722 20:13:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:10.722 20:13:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:10.722 20:13:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.722 20:13:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.722 20:13:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.722 20:13:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.722 20:13:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.722 20:13:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.722 20:13:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.722 20:13:25 -- accel/accel.sh@42 -- # jq -r . 00:06:10.722 [2024-10-16 20:13:25.637704] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
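
The accel_copy_crc32c_C2 case starting here is the same copy_crc32c workload with the vector count raised to two via -C 2: each operation now gathers two 4096-byte source buffers, so the effective transfer size doubles to 8192 bytes, as the configuration block below confirms. Stripped of the harness, the two invocations differ only in that flag; a minimal sketch, reusing the binary path shown in this log:

# previous test: one 4096 B source buffer per operation
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
# this test: two 4096 B source buffers gathered per operation (-C 2)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
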
00:06:10.722 [2024-10-16 20:13:25.637798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58797 ] 00:06:10.981 [2024-10-16 20:13:25.781216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.239 [2024-10-16 20:13:25.924967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.611 20:13:27 -- accel/accel.sh@18 -- # out=' 00:06:12.611 SPDK Configuration: 00:06:12.611 Core mask: 0x1 00:06:12.611 00:06:12.611 Accel Perf Configuration: 00:06:12.611 Workload Type: copy_crc32c 00:06:12.611 CRC-32C seed: 0 00:06:12.611 Vector size: 4096 bytes 00:06:12.611 Transfer size: 8192 bytes 00:06:12.611 Vector count 2 00:06:12.611 Module: software 00:06:12.611 Queue depth: 32 00:06:12.611 Allocate depth: 32 00:06:12.611 # threads/core: 1 00:06:12.611 Run time: 1 seconds 00:06:12.611 Verify: Yes 00:06:12.611 00:06:12.611 Running for 1 seconds... 00:06:12.611 00:06:12.612 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.612 ------------------------------------------------------------------------------------ 00:06:12.612 0,0 233248/s 1822 MiB/s 0 0 00:06:12.612 ==================================================================================== 00:06:12.612 Total 233248/s 1822 MiB/s 0 0' 00:06:12.612 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:12.612 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:12.612 20:13:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:12.612 20:13:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:12.612 20:13:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.612 20:13:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.612 20:13:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.612 20:13:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.612 20:13:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.612 20:13:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.612 20:13:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.612 20:13:27 -- accel/accel.sh@42 -- # jq -r . 00:06:12.612 [2024-10-16 20:13:27.533582] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
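
The Bandwidth column in these tables is simply the transfer rate multiplied by the transfer size; the Failed and Miscompares columns count failed operations and -y verification mismatches, both zero throughout this run. The single-core total therefore has to match the 0,0 row, and both can be sanity-checked with shell arithmetic:

echo $((233248 * 8192 / 1024 / 1024))   # -> 1822 MiB/s for the -C 2 run above
echo $((372480 * 4096 / 1024 / 1024))   # -> 1455 MiB/s for the earlier accel_copy run
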
00:06:12.612 [2024-10-16 20:13:27.533685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58817 ] 00:06:12.941 [2024-10-16 20:13:27.679464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.941 [2024-10-16 20:13:27.822281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val= 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val= 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val=0x1 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val= 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val= 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val=0 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val= 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val=software 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val=32 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val=32 
00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val=1 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val=Yes 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val= 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.221 20:13:27 -- accel/accel.sh@21 -- # val= 00:06:13.221 20:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.221 20:13:27 -- accel/accel.sh@20 -- # read -r var val 00:06:14.599 20:13:29 -- accel/accel.sh@21 -- # val= 00:06:14.599 20:13:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.599 20:13:29 -- accel/accel.sh@21 -- # val= 00:06:14.599 20:13:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.599 20:13:29 -- accel/accel.sh@21 -- # val= 00:06:14.599 20:13:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.599 20:13:29 -- accel/accel.sh@21 -- # val= 00:06:14.599 20:13:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.599 20:13:29 -- accel/accel.sh@21 -- # val= 00:06:14.599 20:13:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.599 20:13:29 -- accel/accel.sh@21 -- # val= 00:06:14.599 20:13:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # IFS=: 00:06:14.599 20:13:29 -- accel/accel.sh@20 -- # read -r var val 00:06:14.599 20:13:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.599 20:13:29 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:14.599 20:13:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.599 00:06:14.599 real 0m3.794s 00:06:14.599 user 0m3.377s 00:06:14.599 sys 0m0.212s 00:06:14.599 20:13:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.599 20:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:14.599 ************************************ 00:06:14.599 END TEST accel_copy_crc32c_C2 00:06:14.599 ************************************ 00:06:14.599 20:13:29 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:14.599 20:13:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
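
Every case in this section funnels through the run_test helper from common/autotest_common.sh, which prints the asterisk banners and times its body; the real/user/sys figures scattered through the log are its output. A rough sketch of its shape, reconstructed from these traces rather than quoted from the source (the exact bookkeeping is an assumption):

run_test() {
	local test_name=$1
	shift
	echo "************************************"
	echo "START TEST $test_name"
	echo "************************************"
	time "$@"
	echo "************************************"
	echo "END TEST $test_name"
	echo "************************************"
}
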
00:06:14.599 20:13:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.599 20:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:14.599 ************************************ 00:06:14.599 START TEST accel_dualcast 00:06:14.599 ************************************ 00:06:14.599 20:13:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:14.599 20:13:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.599 20:13:29 -- accel/accel.sh@17 -- # local accel_module 00:06:14.599 20:13:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:14.599 20:13:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:14.599 20:13:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.599 20:13:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.599 20:13:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.599 20:13:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.599 20:13:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.599 20:13:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.599 20:13:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.599 20:13:29 -- accel/accel.sh@42 -- # jq -r . 00:06:14.599 [2024-10-16 20:13:29.487733] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:14.599 [2024-10-16 20:13:29.487837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58858 ] 00:06:14.860 [2024-10-16 20:13:29.636946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.119 [2024-10-16 20:13:29.790699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.493 20:13:31 -- accel/accel.sh@18 -- # out=' 00:06:16.493 SPDK Configuration: 00:06:16.493 Core mask: 0x1 00:06:16.493 00:06:16.493 Accel Perf Configuration: 00:06:16.493 Workload Type: dualcast 00:06:16.493 Transfer size: 4096 bytes 00:06:16.493 Vector count 1 00:06:16.493 Module: software 00:06:16.493 Queue depth: 32 00:06:16.493 Allocate depth: 32 00:06:16.493 # threads/core: 1 00:06:16.493 Run time: 1 seconds 00:06:16.493 Verify: Yes 00:06:16.493 00:06:16.493 Running for 1 seconds... 00:06:16.493 00:06:16.493 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.493 ------------------------------------------------------------------------------------ 00:06:16.493 0,0 421376/s 1646 MiB/s 0 0 00:06:16.493 ==================================================================================== 00:06:16.493 Total 421376/s 1646 MiB/s 0 0' 00:06:16.493 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:16.493 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:16.493 20:13:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:16.493 20:13:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:16.493 20:13:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.493 20:13:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.493 20:13:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.493 20:13:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.493 20:13:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.493 20:13:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.493 20:13:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.493 20:13:31 -- accel/accel.sh@42 -- # jq -r . 
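
Two details are worth noting in the dualcast setup above. First, the workload itself: dualcast writes a single 4096-byte source buffer to two destination buffers in one operation. Second, the build_accel_config and jq traces show how every run here is configured: the generated JSON is handed to the binary on file descriptor 62, which is why each invocation carries -c /dev/fd/62. Roughly, under the assumption that the descriptor is wired up with process substitution (the redirection itself is not visible in the xtrace):

# sketch: feed the generated JSON accel config to accel_perf on fd 62
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 62< <(build_accel_config)
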
00:06:16.751 [2024-10-16 20:13:31.431303] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:16.751 [2024-10-16 20:13:31.431409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58881 ] 00:06:16.751 [2024-10-16 20:13:31.579253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.010 [2024-10-16 20:13:31.736485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val= 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val= 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val=0x1 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val= 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val= 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val=dualcast 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val= 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val=software 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val=32 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val=32 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val=1 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 
20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val=Yes 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val= 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.010 20:13:31 -- accel/accel.sh@21 -- # val= 00:06:17.010 20:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.010 20:13:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.912 20:13:33 -- accel/accel.sh@21 -- # val= 00:06:18.912 20:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.912 20:13:33 -- accel/accel.sh@21 -- # val= 00:06:18.912 20:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.912 20:13:33 -- accel/accel.sh@21 -- # val= 00:06:18.912 20:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.912 20:13:33 -- accel/accel.sh@21 -- # val= 00:06:18.912 20:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.912 20:13:33 -- accel/accel.sh@21 -- # val= 00:06:18.912 20:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.912 20:13:33 -- accel/accel.sh@21 -- # val= 00:06:18.912 20:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.912 20:13:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.912 ************************************ 00:06:18.912 END TEST accel_dualcast 00:06:18.912 ************************************ 00:06:18.912 20:13:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.912 20:13:33 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:18.912 20:13:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.912 00:06:18.912 real 0m3.892s 00:06:18.912 user 0m3.458s 00:06:18.912 sys 0m0.228s 00:06:18.912 20:13:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.912 20:13:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.912 20:13:33 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:18.912 20:13:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:18.912 20:13:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.912 20:13:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.912 ************************************ 00:06:18.912 START TEST accel_compare 00:06:18.912 ************************************ 00:06:18.912 20:13:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:18.912 
20:13:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.912 20:13:33 -- accel/accel.sh@17 -- # local accel_module 00:06:18.912 20:13:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:18.912 20:13:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:18.912 20:13:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.912 20:13:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.912 20:13:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.912 20:13:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.912 20:13:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.912 20:13:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.912 20:13:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.912 20:13:33 -- accel/accel.sh@42 -- # jq -r . 00:06:18.912 [2024-10-16 20:13:33.429380] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:18.912 [2024-10-16 20:13:33.429485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58922 ] 00:06:18.912 [2024-10-16 20:13:33.579229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.912 [2024-10-16 20:13:33.756148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.815 20:13:35 -- accel/accel.sh@18 -- # out=' 00:06:20.815 SPDK Configuration: 00:06:20.815 Core mask: 0x1 00:06:20.815 00:06:20.815 Accel Perf Configuration: 00:06:20.815 Workload Type: compare 00:06:20.815 Transfer size: 4096 bytes 00:06:20.815 Vector count 1 00:06:20.815 Module: software 00:06:20.815 Queue depth: 32 00:06:20.815 Allocate depth: 32 00:06:20.815 # threads/core: 1 00:06:20.815 Run time: 1 seconds 00:06:20.815 Verify: Yes 00:06:20.815 00:06:20.815 Running for 1 seconds... 00:06:20.815 00:06:20.815 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.815 ------------------------------------------------------------------------------------ 00:06:20.815 0,0 426528/s 1666 MiB/s 0 0 00:06:20.815 ==================================================================================== 00:06:20.815 Total 426528/s 1666 MiB/s 0 0' 00:06:20.815 20:13:35 -- accel/accel.sh@20 -- # IFS=: 00:06:20.815 20:13:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.815 20:13:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:20.815 20:13:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:20.815 20:13:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.815 20:13:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.815 20:13:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.815 20:13:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.815 20:13:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.815 20:13:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.815 20:13:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.815 20:13:35 -- accel/accel.sh@42 -- # jq -r . 00:06:20.815 [2024-10-16 20:13:35.539460] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
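
The dense IFS=: / read -r var val / case "$var" triplets that dominate this log are accel.sh parsing the captured summary (the out=' block above) line by line, recovering the operation and module names for the closing assertions ([[ -n software ]], [[ -n compare ]], [[ software == software ]]). A reconstruction of that loop's shape follows; the read and trim steps match the @20/@21 traces, while the case patterns are assumptions:

while IFS=: read -r var val; do
	val=${val# }   # @21: strip the space left after the colon (hence val=compare, val='4096 bytes')
	case "$var" in
		*'Workload Type'*) accel_opc=$val ;;      # @24: accel_opc=compare
		*Module*)          accel_module=$val ;;   # @23: accel_module=software
	esac
done <<< "$out"
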
00:06:20.815 [2024-10-16 20:13:35.539697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58949 ] 00:06:20.815 [2024-10-16 20:13:35.689171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.076 [2024-10-16 20:13:35.917011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.338 20:13:36 -- accel/accel.sh@21 -- # val= 00:06:21.338 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.338 20:13:36 -- accel/accel.sh@21 -- # val= 00:06:21.338 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.338 20:13:36 -- accel/accel.sh@21 -- # val=0x1 00:06:21.338 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.338 20:13:36 -- accel/accel.sh@21 -- # val= 00:06:21.338 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.338 20:13:36 -- accel/accel.sh@21 -- # val= 00:06:21.338 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.338 20:13:36 -- accel/accel.sh@21 -- # val=compare 00:06:21.338 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.338 20:13:36 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.338 20:13:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.338 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.338 20:13:36 -- accel/accel.sh@21 -- # val= 00:06:21.338 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.338 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.339 20:13:36 -- accel/accel.sh@21 -- # val=software 00:06:21.339 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.339 20:13:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.339 20:13:36 -- accel/accel.sh@21 -- # val=32 00:06:21.339 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.339 20:13:36 -- accel/accel.sh@21 -- # val=32 00:06:21.339 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.339 20:13:36 -- accel/accel.sh@21 -- # val=1 00:06:21.339 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.339 20:13:36 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:21.339 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.339 20:13:36 -- accel/accel.sh@21 -- # val=Yes 00:06:21.339 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.339 20:13:36 -- accel/accel.sh@21 -- # val= 00:06:21.339 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.339 20:13:36 -- accel/accel.sh@21 -- # val= 00:06:21.339 20:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.339 20:13:36 -- accel/accel.sh@20 -- # read -r var val 00:06:22.722 20:13:37 -- accel/accel.sh@21 -- # val= 00:06:22.722 20:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.722 20:13:37 -- accel/accel.sh@21 -- # val= 00:06:22.722 20:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.722 20:13:37 -- accel/accel.sh@21 -- # val= 00:06:22.722 20:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.722 20:13:37 -- accel/accel.sh@21 -- # val= 00:06:22.722 20:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.722 20:13:37 -- accel/accel.sh@21 -- # val= 00:06:22.722 20:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.722 20:13:37 -- accel/accel.sh@21 -- # val= 00:06:22.722 20:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.722 20:13:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.722 20:13:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.722 20:13:37 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:22.722 20:13:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.722 00:06:22.722 real 0m4.228s 00:06:22.722 user 0m3.737s 00:06:22.722 sys 0m0.280s 00:06:22.722 20:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.722 20:13:37 -- common/autotest_common.sh@10 -- # set +x 00:06:22.722 ************************************ 00:06:22.722 END TEST accel_compare 00:06:22.722 ************************************ 00:06:22.983 20:13:37 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:22.983 20:13:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:22.983 20:13:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.983 20:13:37 -- common/autotest_common.sh@10 -- # set +x 00:06:22.983 ************************************ 00:06:22.983 START TEST accel_xor 00:06:22.983 ************************************ 00:06:22.983 20:13:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:22.983 20:13:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.983 20:13:37 -- accel/accel.sh@17 -- # local accel_module 00:06:22.983 
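The Total row of each results table is simply the transfer rate times the 4096-byte transfer size: for the compare run above, 426528 transfers/s x 4096 bytes / 2^20 = 1666 MiB/s, matching the report. A quick shell check:

# 426528 transfers/s at 4096 B each, expressed in MiB/s
echo $((426528 * 4096 / 1048576))   # prints 1666

The per-core row can read a few MiB/s higher than the Total row (compare 510 vs 503 MiB/s in the dif_verify table further down), presumably because it is computed over the core's measured runtime rather than the full wall-clock second.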
20:13:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:22.983 20:13:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:22.983 20:13:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.983 20:13:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.983 20:13:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.983 20:13:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.983 20:13:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.983 20:13:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.983 20:13:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.983 20:13:37 -- accel/accel.sh@42 -- # jq -r . 00:06:22.983 [2024-10-16 20:13:37.713852] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:22.983 [2024-10-16 20:13:37.713932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58996 ] 00:06:22.983 [2024-10-16 20:13:37.852079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.243 [2024-10-16 20:13:37.995490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.157 20:13:39 -- accel/accel.sh@18 -- # out=' 00:06:25.157 SPDK Configuration: 00:06:25.157 Core mask: 0x1 00:06:25.157 00:06:25.157 Accel Perf Configuration: 00:06:25.157 Workload Type: xor 00:06:25.157 Source buffers: 2 00:06:25.157 Transfer size: 4096 bytes 00:06:25.157 Vector count 1 00:06:25.157 Module: software 00:06:25.157 Queue depth: 32 00:06:25.157 Allocate depth: 32 00:06:25.157 # threads/core: 1 00:06:25.157 Run time: 1 seconds 00:06:25.157 Verify: Yes 00:06:25.157 00:06:25.157 Running for 1 seconds... 00:06:25.157 00:06:25.157 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.157 ------------------------------------------------------------------------------------ 00:06:25.157 0,0 438112/s 1711 MiB/s 0 0 00:06:25.157 ==================================================================================== 00:06:25.157 Total 438112/s 1711 MiB/s 0 0' 00:06:25.157 20:13:39 -- accel/accel.sh@20 -- # IFS=: 00:06:25.157 20:13:39 -- accel/accel.sh@20 -- # read -r var val 00:06:25.157 20:13:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:25.157 20:13:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:25.158 20:13:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.158 20:13:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.158 20:13:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.158 20:13:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.158 20:13:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.158 20:13:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.158 20:13:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.158 20:13:39 -- accel/accel.sh@42 -- # jq -r . 00:06:25.158 [2024-10-16 20:13:39.621528] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
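The build_accel_config lines repeated before every run show how that /dev/fd/62 config is produced: JSON fragments accumulate in the accel_json_cfg array, are joined with commas (local IFS=,), normalized by jq -r ., and handed to accel_perf on an anonymous descriptor. A rough sketch of the same pattern, reconstructed from the xtrace rather than quoted from accel.sh:

# Hypothetical equivalent: feed an (empty) JSON accel config over an
# anonymous fd using process substitution
./build/examples/accel_perf -c <(jq -r . <<< '{}') -t 1 -w xor -y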
00:06:25.158 [2024-10-16 20:13:39.621627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59018 ] 00:06:25.158 [2024-10-16 20:13:39.764621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.158 [2024-10-16 20:13:39.909128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val= 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val= 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val=0x1 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val= 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val= 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val=xor 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val=2 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val= 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val=software 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val=32 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val=32 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val=1 00:06:25.158 20:13:40 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val=Yes 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val= 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.158 20:13:40 -- accel/accel.sh@21 -- # val= 00:06:25.158 20:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.158 20:13:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.581 20:13:41 -- accel/accel.sh@21 -- # val= 00:06:26.581 20:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.581 20:13:41 -- accel/accel.sh@21 -- # val= 00:06:26.581 20:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.581 20:13:41 -- accel/accel.sh@21 -- # val= 00:06:26.581 20:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.581 20:13:41 -- accel/accel.sh@21 -- # val= 00:06:26.581 20:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.581 20:13:41 -- accel/accel.sh@21 -- # val= 00:06:26.581 20:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.581 20:13:41 -- accel/accel.sh@21 -- # val= 00:06:26.581 20:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.581 20:13:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.581 ************************************ 00:06:26.581 END TEST accel_xor 00:06:26.581 ************************************ 00:06:26.581 20:13:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.581 20:13:41 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:26.581 20:13:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.581 00:06:26.581 real 0m3.813s 00:06:26.581 user 0m3.373s 00:06:26.581 sys 0m0.230s 00:06:26.581 20:13:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.581 20:13:41 -- common/autotest_common.sh@10 -- # set +x 00:06:26.839 20:13:41 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:26.839 20:13:41 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:26.839 20:13:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.839 20:13:41 -- common/autotest_common.sh@10 -- # set +x 00:06:26.839 ************************************ 00:06:26.839 START TEST accel_xor 00:06:26.839 ************************************ 00:06:26.839 
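This second accel_xor instance repeats the workload with three source buffers instead of two, selected by the extra -x 3 argument visible in the trace below (the 'Source buffers:' line in each configuration dump confirms the count). Side by side:

# Two source buffers (the run above: ~438112 transfers/s)
./build/examples/accel_perf -t 1 -w xor -y
# Three source buffers (the run below: ~323200 transfers/s)
./build/examples/accel_perf -t 1 -w xor -y -x 3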
20:13:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:26.839 20:13:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.839 20:13:41 -- accel/accel.sh@17 -- # local accel_module 00:06:26.839 20:13:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:26.839 20:13:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:26.839 20:13:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.839 20:13:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.839 20:13:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.839 20:13:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.839 20:13:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.839 20:13:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.839 20:13:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.839 20:13:41 -- accel/accel.sh@42 -- # jq -r . 00:06:26.839 [2024-10-16 20:13:41.585033] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:26.839 [2024-10-16 20:13:41.585147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ] 00:06:26.839 [2024-10-16 20:13:41.733875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.097 [2024-10-16 20:13:41.909185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.010 20:13:43 -- accel/accel.sh@18 -- # out=' 00:06:29.010 SPDK Configuration: 00:06:29.010 Core mask: 0x1 00:06:29.010 00:06:29.010 Accel Perf Configuration: 00:06:29.010 Workload Type: xor 00:06:29.010 Source buffers: 3 00:06:29.010 Transfer size: 4096 bytes 00:06:29.010 Vector count 1 00:06:29.010 Module: software 00:06:29.010 Queue depth: 32 00:06:29.010 Allocate depth: 32 00:06:29.010 # threads/core: 1 00:06:29.010 Run time: 1 seconds 00:06:29.010 Verify: Yes 00:06:29.010 00:06:29.010 Running for 1 seconds... 00:06:29.010 00:06:29.010 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.010 ------------------------------------------------------------------------------------ 00:06:29.010 0,0 323200/s 1262 MiB/s 0 0 00:06:29.010 ==================================================================================== 00:06:29.010 Total 323200/s 1262 MiB/s 0 0' 00:06:29.010 20:13:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.010 20:13:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.010 20:13:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:29.010 20:13:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:29.010 20:13:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.010 20:13:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.010 20:13:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.010 20:13:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.010 20:13:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.010 20:13:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.010 20:13:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.010 20:13:43 -- accel/accel.sh@42 -- # jq -r . 00:06:29.010 [2024-10-16 20:13:43.616669] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:29.010 [2024-10-16 20:13:43.616902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59085 ] 00:06:29.010 [2024-10-16 20:13:43.763357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.010 [2024-10-16 20:13:43.915624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val= 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val= 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val=0x1 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val= 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val= 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val=xor 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val=3 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val= 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val=software 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val=32 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val=32 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val=1 00:06:29.272 20:13:44 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val=Yes 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val= 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:29.272 20:13:44 -- accel/accel.sh@21 -- # val= 00:06:29.272 20:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # IFS=: 00:06:29.272 20:13:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.659 20:13:45 -- accel/accel.sh@21 -- # val= 00:06:30.659 20:13:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.659 20:13:45 -- accel/accel.sh@21 -- # val= 00:06:30.659 20:13:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.659 20:13:45 -- accel/accel.sh@21 -- # val= 00:06:30.659 20:13:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.659 20:13:45 -- accel/accel.sh@21 -- # val= 00:06:30.659 20:13:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.659 20:13:45 -- accel/accel.sh@21 -- # val= 00:06:30.659 20:13:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.659 20:13:45 -- accel/accel.sh@21 -- # val= 00:06:30.659 20:13:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # IFS=: 00:06:30.659 20:13:45 -- accel/accel.sh@20 -- # read -r var val 00:06:30.659 ************************************ 00:06:30.659 END TEST accel_xor 00:06:30.659 ************************************ 00:06:30.659 20:13:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.659 20:13:45 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:30.659 20:13:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.659 00:06:30.659 real 0m3.946s 00:06:30.659 user 0m3.494s 00:06:30.659 sys 0m0.244s 00:06:30.659 20:13:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.659 20:13:45 -- common/autotest_common.sh@10 -- # set +x 00:06:30.659 20:13:45 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:30.659 20:13:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:30.659 20:13:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.659 20:13:45 -- common/autotest_common.sh@10 -- # set +x 00:06:30.659 ************************************ 00:06:30.659 START TEST accel_dif_verify 00:06:30.659 ************************************ 
00:06:30.659 20:13:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:30.659 20:13:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.659 20:13:45 -- accel/accel.sh@17 -- # local accel_module 00:06:30.659 20:13:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:30.659 20:13:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:30.659 20:13:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.659 20:13:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.659 20:13:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.659 20:13:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.659 20:13:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.659 20:13:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.659 20:13:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.659 20:13:45 -- accel/accel.sh@42 -- # jq -r . 00:06:30.920 [2024-10-16 20:13:45.588324] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:30.920 [2024-10-16 20:13:45.588431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59126 ] 00:06:30.920 [2024-10-16 20:13:45.735354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.182 [2024-10-16 20:13:45.876886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.562 20:13:47 -- accel/accel.sh@18 -- # out=' 00:06:32.562 SPDK Configuration: 00:06:32.562 Core mask: 0x1 00:06:32.562 00:06:32.562 Accel Perf Configuration: 00:06:32.562 Workload Type: dif_verify 00:06:32.562 Vector size: 4096 bytes 00:06:32.562 Transfer size: 4096 bytes 00:06:32.562 Block size: 512 bytes 00:06:32.562 Metadata size: 8 bytes 00:06:32.562 Vector count 1 00:06:32.562 Module: software 00:06:32.562 Queue depth: 32 00:06:32.562 Allocate depth: 32 00:06:32.562 # threads/core: 1 00:06:32.562 Run time: 1 seconds 00:06:32.562 Verify: No 00:06:32.562 00:06:32.562 Running for 1 seconds... 00:06:32.562 00:06:32.562 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.562 ------------------------------------------------------------------------------------ 00:06:32.562 0,0 128768/s 510 MiB/s 0 0 00:06:32.562 ==================================================================================== 00:06:32.562 Total 128768/s 503 MiB/s 0 0' 00:06:32.562 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.562 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.562 20:13:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:32.562 20:13:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:32.562 20:13:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.562 20:13:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.562 20:13:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.562 20:13:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.562 20:13:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.562 20:13:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.562 20:13:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.562 20:13:47 -- accel/accel.sh@42 -- # jq -r . 00:06:32.821 [2024-10-16 20:13:47.496345] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:32.821 [2024-10-16 20:13:47.496448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59152 ] 00:06:32.821 [2024-10-16 20:13:47.641683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.082 [2024-10-16 20:13:47.780956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val= 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val= 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val=0x1 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val= 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val= 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val=dif_verify 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val= 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val=software 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 
-- # val=32 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val=32 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val=1 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val=No 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val= 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.082 20:13:47 -- accel/accel.sh@21 -- # val= 00:06:33.082 20:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.082 20:13:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.524 20:13:49 -- accel/accel.sh@21 -- # val= 00:06:34.524 20:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # IFS=: 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # read -r var val 00:06:34.524 20:13:49 -- accel/accel.sh@21 -- # val= 00:06:34.524 20:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # IFS=: 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # read -r var val 00:06:34.524 20:13:49 -- accel/accel.sh@21 -- # val= 00:06:34.524 20:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # IFS=: 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # read -r var val 00:06:34.524 20:13:49 -- accel/accel.sh@21 -- # val= 00:06:34.524 20:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # IFS=: 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # read -r var val 00:06:34.524 20:13:49 -- accel/accel.sh@21 -- # val= 00:06:34.524 20:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # IFS=: 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # read -r var val 00:06:34.524 20:13:49 -- accel/accel.sh@21 -- # val= 00:06:34.524 20:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # IFS=: 00:06:34.524 20:13:49 -- accel/accel.sh@20 -- # read -r var val 00:06:34.524 20:13:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.524 20:13:49 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:34.524 20:13:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.524 00:06:34.524 real 0m3.810s 00:06:34.524 user 0m3.375s 00:06:34.524 sys 0m0.233s 00:06:34.524 ************************************ 00:06:34.524 END TEST accel_dif_verify 00:06:34.524 ************************************ 00:06:34.524 20:13:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.524 
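The dif_verify pass that just finished exercises T10 DIF checking: per its configuration dump, each 4096-byte transfer is laid out as 512-byte blocks with 8 bytes of protection metadata, and 'Verify: No' is expected since verification is the workload itself, so the harness passes no -y. Minimal sketch:

# DIF verify on the software module; block/metadata sizes are defaults
./build/examples/accel_perf -t 1 -w dif_verify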
20:13:49 -- common/autotest_common.sh@10 -- # set +x 00:06:34.524 20:13:49 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:34.524 20:13:49 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:34.524 20:13:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.524 20:13:49 -- common/autotest_common.sh@10 -- # set +x 00:06:34.524 ************************************ 00:06:34.524 START TEST accel_dif_generate 00:06:34.524 ************************************ 00:06:34.524 20:13:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:34.524 20:13:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.524 20:13:49 -- accel/accel.sh@17 -- # local accel_module 00:06:34.524 20:13:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:34.524 20:13:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:34.524 20:13:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.524 20:13:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.524 20:13:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.524 20:13:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.524 20:13:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.524 20:13:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.524 20:13:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.524 20:13:49 -- accel/accel.sh@42 -- # jq -r . 00:06:34.785 [2024-10-16 20:13:49.463221] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:34.785 [2024-10-16 20:13:49.463329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59193 ] 00:06:34.785 [2024-10-16 20:13:49.611074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.046 [2024-10-16 20:13:49.751757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.430 20:13:51 -- accel/accel.sh@18 -- # out=' 00:06:36.430 SPDK Configuration: 00:06:36.430 Core mask: 0x1 00:06:36.430 00:06:36.430 Accel Perf Configuration: 00:06:36.430 Workload Type: dif_generate 00:06:36.430 Vector size: 4096 bytes 00:06:36.430 Transfer size: 4096 bytes 00:06:36.430 Block size: 512 bytes 00:06:36.430 Metadata size: 8 bytes 00:06:36.430 Vector count 1 00:06:36.430 Module: software 00:06:36.430 Queue depth: 32 00:06:36.430 Allocate depth: 32 00:06:36.430 # threads/core: 1 00:06:36.430 Run time: 1 seconds 00:06:36.430 Verify: No 00:06:36.430 00:06:36.430 Running for 1 seconds... 
00:06:36.430 00:06:36.430 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.430 ------------------------------------------------------------------------------------ 00:06:36.430 0,0 154080/s 611 MiB/s 0 0 00:06:36.430 ==================================================================================== 00:06:36.430 Total 154080/s 601 MiB/s 0 0' 00:06:36.430 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.430 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.430 20:13:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:36.430 20:13:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:36.430 20:13:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.430 20:13:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.430 20:13:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.430 20:13:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.430 20:13:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.430 20:13:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.430 20:13:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.430 20:13:51 -- accel/accel.sh@42 -- # jq -r . 00:06:36.690 [2024-10-16 20:13:51.377930] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:36.690 [2024-10-16 20:13:51.378171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59219 ] 00:06:36.690 [2024-10-16 20:13:51.525494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.951 [2024-10-16 20:13:51.664571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val= 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val= 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val=0x1 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val= 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val= 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val=dif_generate 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 
00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val= 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val=software 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val=32 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val=32 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val=1 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val=No 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val= 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.951 20:13:51 -- accel/accel.sh@21 -- # val= 00:06:36.951 20:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.951 20:13:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.337 20:13:53 -- accel/accel.sh@21 -- # val= 00:06:38.337 20:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:38.337 20:13:53 -- accel/accel.sh@21 -- # val= 00:06:38.337 20:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:38.337 20:13:53 -- accel/accel.sh@21 -- # val= 00:06:38.337 20:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.337 20:13:53 -- 
accel/accel.sh@20 -- # IFS=: 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:38.337 20:13:53 -- accel/accel.sh@21 -- # val= 00:06:38.337 20:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:38.337 20:13:53 -- accel/accel.sh@21 -- # val= 00:06:38.337 20:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:38.337 20:13:53 -- accel/accel.sh@21 -- # val= 00:06:38.337 20:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:38.337 20:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:38.337 20:13:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.337 20:13:53 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:38.337 20:13:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.337 00:06:38.337 real 0m3.816s 00:06:38.337 user 0m3.384s 00:06:38.337 sys 0m0.227s 00:06:38.337 20:13:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.337 20:13:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.337 ************************************ 00:06:38.337 END TEST accel_dif_generate 00:06:38.337 ************************************ 00:06:38.598 20:13:53 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:38.598 20:13:53 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:38.598 20:13:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.598 20:13:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.598 ************************************ 00:06:38.598 START TEST accel_dif_generate_copy 00:06:38.598 ************************************ 00:06:38.598 20:13:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:38.598 20:13:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.598 20:13:53 -- accel/accel.sh@17 -- # local accel_module 00:06:38.598 20:13:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:38.598 20:13:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:38.598 20:13:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.598 20:13:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.598 20:13:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.598 20:13:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.598 20:13:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.598 20:13:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.598 20:13:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.598 20:13:53 -- accel/accel.sh@42 -- # jq -r . 00:06:38.598 [2024-10-16 20:13:53.313420] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
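accel_dif_generate (above) produces protection information in place, while the accel_dif_generate_copy pass starting here generates it as part of a copy; note that its configuration dump below lists no separate block/metadata sizes. Sketch of the two variants:

# Generate DIF in place (the test above) vs. generate during a copy (below)
./build/examples/accel_perf -t 1 -w dif_generate
./build/examples/accel_perf -t 1 -w dif_generate_copy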
00:06:38.598 [2024-10-16 20:13:53.313615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59259 ] 00:06:38.598 [2024-10-16 20:13:53.459399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.858 [2024-10-16 20:13:53.599082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.771 20:13:55 -- accel/accel.sh@18 -- # out=' 00:06:40.771 SPDK Configuration: 00:06:40.771 Core mask: 0x1 00:06:40.771 00:06:40.771 Accel Perf Configuration: 00:06:40.771 Workload Type: dif_generate_copy 00:06:40.771 Vector size: 4096 bytes 00:06:40.771 Transfer size: 4096 bytes 00:06:40.771 Vector count 1 00:06:40.771 Module: software 00:06:40.771 Queue depth: 32 00:06:40.771 Allocate depth: 32 00:06:40.771 # threads/core: 1 00:06:40.771 Run time: 1 seconds 00:06:40.771 Verify: No 00:06:40.771 00:06:40.772 Running for 1 seconds... 00:06:40.772 00:06:40.772 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.772 ------------------------------------------------------------------------------------ 00:06:40.772 0,0 118176/s 468 MiB/s 0 0 00:06:40.772 ==================================================================================== 00:06:40.772 Total 118176/s 461 MiB/s 0 0' 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:40.772 20:13:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:40.772 20:13:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.772 20:13:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.772 20:13:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.772 20:13:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.772 20:13:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.772 20:13:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.772 20:13:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.772 20:13:55 -- accel/accel.sh@42 -- # jq -r . 00:06:40.772 [2024-10-16 20:13:55.212238] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:40.772 [2024-10-16 20:13:55.212448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59281 ] 00:06:40.772 [2024-10-16 20:13:55.359324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.772 [2024-10-16 20:13:55.499680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val= 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val= 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val=0x1 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val= 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val= 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val= 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val=software 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val=32 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val=32 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 
-- # val=1 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val=No 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val= 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:40.772 20:13:55 -- accel/accel.sh@21 -- # val= 00:06:40.772 20:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:40.772 20:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.157 20:13:57 -- accel/accel.sh@21 -- # val= 00:06:42.157 20:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:42.157 20:13:57 -- accel/accel.sh@21 -- # val= 00:06:42.157 20:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:42.157 20:13:57 -- accel/accel.sh@21 -- # val= 00:06:42.157 20:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:42.157 20:13:57 -- accel/accel.sh@21 -- # val= 00:06:42.157 20:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:42.157 20:13:57 -- accel/accel.sh@21 -- # val= 00:06:42.157 20:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:42.157 20:13:57 -- accel/accel.sh@21 -- # val= 00:06:42.157 20:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # IFS=: 00:06:42.157 20:13:57 -- accel/accel.sh@20 -- # read -r var val 00:06:42.157 20:13:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.157 20:13:57 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:42.157 20:13:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.157 00:06:42.157 real 0m3.798s 00:06:42.157 user 0m3.365s 00:06:42.157 sys 0m0.230s 00:06:42.157 20:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.157 20:13:57 -- common/autotest_common.sh@10 -- # set +x 00:06:42.157 ************************************ 00:06:42.157 END TEST accel_dif_generate_copy 00:06:42.157 ************************************ 00:06:42.418 20:13:57 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:42.418 20:13:57 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.418 20:13:57 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:42.418 20:13:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.418 20:13:57 -- 
common/autotest_common.sh@10 -- # set +x 00:06:42.418 ************************************ 00:06:42.418 START TEST accel_comp 00:06:42.418 ************************************ 00:06:42.418 20:13:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.418 20:13:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.418 20:13:57 -- accel/accel.sh@17 -- # local accel_module 00:06:42.418 20:13:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.418 20:13:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.418 20:13:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.418 20:13:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.418 20:13:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.418 20:13:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.418 20:13:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.418 20:13:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.418 20:13:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.418 20:13:57 -- accel/accel.sh@42 -- # jq -r . 00:06:42.418 [2024-10-16 20:13:57.153295] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:42.419 [2024-10-16 20:13:57.153396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:06:42.419 [2024-10-16 20:13:57.299337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.677 [2024-10-16 20:13:57.440635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.580 20:13:59 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:44.580 00:06:44.580 SPDK Configuration: 00:06:44.580 Core mask: 0x1 00:06:44.580 00:06:44.580 Accel Perf Configuration: 00:06:44.580 Workload Type: compress 00:06:44.580 Transfer size: 4096 bytes 00:06:44.580 Vector count 1 00:06:44.580 Module: software 00:06:44.580 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:44.580 Queue depth: 32 00:06:44.580 Allocate depth: 32 00:06:44.580 # threads/core: 1 00:06:44.580 Run time: 1 seconds 00:06:44.580 Verify: No 00:06:44.580 00:06:44.580 Running for 1 seconds... 
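The accel.sh@12 xtrace above shows the exact accel_perf invocation behind this compress run, and the results table follows below. A minimal sketch of replaying it by hand, assuming a built SPDK tree at the same paths and using only the flags the printed configuration block confirms (-t 1 shows up as 'Run time: 1 seconds', -w compress as 'Workload Type: compress', -l as 'File Name'):

  # Replay the one-second software compress run against the test bib file.
  # The harness also passes '-c /dev/fd/62', an accel JSON config it pipes in;
  # omitting it here assumes the default software module is used, as the
  # 'Module: software' line in the configuration block suggests.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib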
00:06:44.580 00:06:44.580 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.580 ------------------------------------------------------------------------------------ 00:06:44.580 0,0 64288/s 251 MiB/s 0 0 00:06:44.580 ==================================================================================== 00:06:44.580 Total 64288/s 251 MiB/s 0 0' 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:44.580 20:13:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:44.580 20:13:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.580 20:13:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.580 20:13:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.580 20:13:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.580 20:13:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.580 20:13:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.580 20:13:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.580 20:13:59 -- accel/accel.sh@42 -- # jq -r . 00:06:44.580 [2024-10-16 20:13:59.060149] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:44.580 [2024-10-16 20:13:59.060249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59342 ] 00:06:44.580 [2024-10-16 20:13:59.207521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.580 [2024-10-16 20:13:59.344602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val= 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val= 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val= 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val=0x1 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val= 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val= 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val=compress 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=:
00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val= 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val=software 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val=32 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val=32 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val=1 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val=No 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val= 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:44.580 20:13:59 -- accel/accel.sh@21 -- # val= 00:06:44.580 20:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:44.580 20:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.483 20:14:00 -- accel/accel.sh@21 -- # val= 00:06:46.483 20:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.483 20:14:00 -- accel/accel.sh@21 -- # val= 00:06:46.483 20:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.483 20:14:00 -- accel/accel.sh@21 -- # val= 00:06:46.483 20:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.483 20:14:00 -- accel/accel.sh@21 -- # val= 
00:06:46.483 20:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.483 20:14:00 -- accel/accel.sh@21 -- # val= 00:06:46.483 20:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.483 20:14:00 -- accel/accel.sh@21 -- # val= 00:06:46.483 20:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.483 20:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.483 20:14:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.483 20:14:00 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:46.483 20:14:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.483 00:06:46.483 real 0m3.816s 00:06:46.483 user 0m3.367s 00:06:46.483 sys 0m0.246s 00:06:46.483 ************************************ 00:06:46.483 END TEST accel_comp 00:06:46.483 ************************************ 00:06:46.483 20:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.483 20:14:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.483 20:14:00 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:46.483 20:14:00 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:46.483 20:14:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.483 20:14:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.483 ************************************ 00:06:46.484 START TEST accel_decomp 00:06:46.484 ************************************ 00:06:46.484 20:14:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:46.484 20:14:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.484 20:14:00 -- accel/accel.sh@17 -- # local accel_module 00:06:46.484 20:14:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:46.484 20:14:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:46.484 20:14:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.484 20:14:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.484 20:14:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.484 20:14:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.484 20:14:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.484 20:14:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.484 20:14:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.484 20:14:00 -- accel/accel.sh@42 -- # jq -r . 00:06:46.484 [2024-10-16 20:14:01.009710] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:46.484 [2024-10-16 20:14:01.009812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59383 ] 00:06:46.484 [2024-10-16 20:14:01.155555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.484 [2024-10-16 20:14:01.295964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.386 20:14:02 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:48.386 00:06:48.386 SPDK Configuration: 00:06:48.386 Core mask: 0x1 00:06:48.386 00:06:48.386 Accel Perf Configuration: 00:06:48.386 Workload Type: decompress 00:06:48.386 Transfer size: 4096 bytes 00:06:48.386 Vector count 1 00:06:48.386 Module: software 00:06:48.386 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.386 Queue depth: 32 00:06:48.386 Allocate depth: 32 00:06:48.386 # threads/core: 1 00:06:48.386 Run time: 1 seconds 00:06:48.386 Verify: Yes 00:06:48.386 00:06:48.386 Running for 1 seconds... 00:06:48.386 00:06:48.386 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.386 ------------------------------------------------------------------------------------ 00:06:48.386 0,0 81312/s 317 MiB/s 0 0 00:06:48.386 ==================================================================================== 00:06:48.386 Total 81312/s 317 MiB/s 0 0' 00:06:48.386 20:14:02 -- accel/accel.sh@20 -- # IFS=: 00:06:48.386 20:14:02 -- accel/accel.sh@20 -- # read -r var val 00:06:48.386 20:14:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.386 20:14:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.386 20:14:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.386 20:14:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.386 20:14:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.386 20:14:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.386 20:14:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.386 20:14:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.386 20:14:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.386 20:14:02 -- accel/accel.sh@42 -- # jq -r . 00:06:48.386 [2024-10-16 20:14:02.911197] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:06:48.386 [2024-10-16 20:14:02.911297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59409 ] 00:06:48.386 [2024-10-16 20:14:03.059036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.386 [2024-10-16 20:14:03.198530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.386 20:14:03 -- accel/accel.sh@21 -- # val= 00:06:48.386 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.386 20:14:03 -- accel/accel.sh@21 -- # val= 00:06:48.386 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.386 20:14:03 -- accel/accel.sh@21 -- # val= 00:06:48.386 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.386 20:14:03 -- accel/accel.sh@21 -- # val=0x1 00:06:48.386 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.386 20:14:03 -- accel/accel.sh@21 -- # val= 00:06:48.386 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.386 20:14:03 -- accel/accel.sh@21 -- # val= 00:06:48.386 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.386 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.386 20:14:03 -- accel/accel.sh@21 -- # val=decompress 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val= 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val=software 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val=32 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- 
accel/accel.sh@21 -- # val=32 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val=1 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val=Yes 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val= 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:48.644 20:14:03 -- accel/accel.sh@21 -- # val= 00:06:48.644 20:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:48.644 20:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.024 20:14:04 -- accel/accel.sh@21 -- # val= 00:06:50.024 20:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.024 20:14:04 -- accel/accel.sh@21 -- # val= 00:06:50.024 20:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.024 20:14:04 -- accel/accel.sh@21 -- # val= 00:06:50.024 20:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.024 20:14:04 -- accel/accel.sh@21 -- # val= 00:06:50.024 20:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.024 20:14:04 -- accel/accel.sh@21 -- # val= 00:06:50.024 20:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.024 20:14:04 -- accel/accel.sh@21 -- # val= 00:06:50.024 20:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.024 20:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.024 20:14:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.024 20:14:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:50.024 20:14:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.024 00:06:50.024 real 0m3.799s 00:06:50.024 user 0m3.384s 00:06:50.024 sys 0m0.213s 00:06:50.024 20:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.024 20:14:04 -- common/autotest_common.sh@10 -- # set +x 00:06:50.024 ************************************ 00:06:50.024 END TEST accel_decomp 00:06:50.024 ************************************ 00:06:50.024 20:14:04 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
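The run_test line above launches the full-buffer variant with -o 0 added. Pairing each invocation in this log with the configuration block it prints suggests the flag mapping: -t 1 appears as 'Run time: 1 seconds', -w decompress as 'Workload Type: decompress', -l as 'File Name', -y as 'Verify: Yes', and with -o 0 the reported 'Transfer size' becomes the full 111250-byte chunk instead of the 4096-byte default seen in the runs above. A hedged sketch of the equivalent direct invocation (same assumption as before about omitting the harness-supplied -c /dev/fd/62 config):

  # Full-size (111250-byte) decompress with verification for one second.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0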
00:06:50.024 20:14:04 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:50.024 20:14:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.024 20:14:04 -- common/autotest_common.sh@10 -- # set +x 00:06:50.024 ************************************ 00:06:50.024 START TEST accel_decmop_full 00:06:50.024 ************************************ 00:06:50.024 20:14:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:50.024 20:14:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.024 20:14:04 -- accel/accel.sh@17 -- # local accel_module 00:06:50.024 20:14:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:50.024 20:14:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:50.024 20:14:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.024 20:14:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.024 20:14:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.024 20:14:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.024 20:14:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.024 20:14:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.024 20:14:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.024 20:14:04 -- accel/accel.sh@42 -- # jq -r . 00:06:50.024 [2024-10-16 20:14:04.841338] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:50.024 [2024-10-16 20:14:04.841438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59450 ] 00:06:50.285 [2024-10-16 20:14:04.987189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.285 [2024-10-16 20:14:05.126134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.238 20:14:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:52.238 00:06:52.238 SPDK Configuration: 00:06:52.238 Core mask: 0x1 00:06:52.238 00:06:52.238 Accel Perf Configuration: 00:06:52.238 Workload Type: decompress 00:06:52.238 Transfer size: 111250 bytes 00:06:52.238 Vector count 1 00:06:52.238 Module: software 00:06:52.238 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.238 Queue depth: 32 00:06:52.238 Allocate depth: 32 00:06:52.238 # threads/core: 1 00:06:52.238 Run time: 1 seconds 00:06:52.238 Verify: Yes 00:06:52.238 00:06:52.238 Running for 1 seconds... 
00:06:52.238 00:06:52.238 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.238 ------------------------------------------------------------------------------------ 00:06:52.238 0,0 5632/s 597 MiB/s 0 0 00:06:52.238 ==================================================================================== 00:06:52.238 Total 5632/s 597 MiB/s 0 0' 00:06:52.238 20:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:52.238 20:14:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:52.238 20:14:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.238 20:14:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.238 20:14:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.238 20:14:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.238 20:14:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.238 20:14:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.238 20:14:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.238 20:14:06 -- accel/accel.sh@42 -- # jq -r . 00:06:52.238 [2024-10-16 20:14:06.752397] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:52.238 [2024-10-16 20:14:06.752498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59471 ] 00:06:52.238 [2024-10-16 20:14:06.898299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.238 [2024-10-16 20:14:07.039070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val= 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val= 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val= 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val=0x1 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val= 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val= 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val=decompress 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:52.238 20:14:07 -- accel/accel.sh@20
-- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val= 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val=software 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val=32 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val=32 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val=1 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val=Yes 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val= 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 20:14:07 -- accel/accel.sh@21 -- # val= 00:06:52.238 20:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 20:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:54.160 20:14:08 -- accel/accel.sh@21 -- # val= 00:06:54.160 20:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.160 20:14:08 -- accel/accel.sh@21 -- # val= 00:06:54.160 20:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.160 20:14:08 -- accel/accel.sh@21 -- # val= 00:06:54.160 20:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.160 20:14:08 -- accel/accel.sh@21 -- # 
val= 00:06:54.160 20:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.160 20:14:08 -- accel/accel.sh@21 -- # val= 00:06:54.160 20:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.160 20:14:08 -- accel/accel.sh@21 -- # val= 00:06:54.160 20:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.160 20:14:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.160 20:14:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.160 20:14:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:54.160 20:14:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.160 00:06:54.160 real 0m3.836s 00:06:54.160 user 0m3.400s 00:06:54.160 sys 0m0.230s 00:06:54.160 20:14:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.160 20:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:54.160 ************************************ 00:06:54.160 END TEST accel_decmop_full 00:06:54.160 ************************************ 00:06:54.160 20:14:08 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:54.160 20:14:08 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:54.160 20:14:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.160 20:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:54.160 ************************************ 00:06:54.160 START TEST accel_decomp_mcore 00:06:54.160 ************************************ 00:06:54.160 20:14:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:54.160 20:14:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.160 20:14:08 -- accel/accel.sh@17 -- # local accel_module 00:06:54.160 20:14:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:54.160 20:14:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:54.160 20:14:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.160 20:14:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.160 20:14:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.160 20:14:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.160 20:14:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.160 20:14:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.160 20:14:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.160 20:14:08 -- accel/accel.sh@42 -- # jq -r . 00:06:54.160 [2024-10-16 20:14:08.720353] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:54.160 [2024-10-16 20:14:08.720457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59512 ] 00:06:54.160 [2024-10-16 20:14:08.867569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.160 [2024-10-16 20:14:09.012765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.160 [2024-10-16 20:14:09.013087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.160 [2024-10-16 20:14:09.013182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.160 [2024-10-16 20:14:09.013184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.074 20:14:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:56.074 00:06:56.074 SPDK Configuration: 00:06:56.074 Core mask: 0xf 00:06:56.074 00:06:56.074 Accel Perf Configuration: 00:06:56.074 Workload Type: decompress 00:06:56.074 Transfer size: 4096 bytes 00:06:56.074 Vector count 1 00:06:56.074 Module: software 00:06:56.074 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.074 Queue depth: 32 00:06:56.074 Allocate depth: 32 00:06:56.074 # threads/core: 1 00:06:56.074 Run time: 1 seconds 00:06:56.074 Verify: Yes 00:06:56.074 00:06:56.074 Running for 1 seconds... 00:06:56.074 00:06:56.074 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.074 ------------------------------------------------------------------------------------ 00:06:56.074 0,0 75520/s 295 MiB/s 0 0 00:06:56.074 3,0 58304/s 227 MiB/s 0 0 00:06:56.074 2,0 58336/s 227 MiB/s 0 0 00:06:56.074 1,0 58240/s 227 MiB/s 0 0 00:06:56.074 ==================================================================================== 00:06:56.074 Total 250400/s 978 MiB/s 0 0' 00:06:56.074 20:14:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:56.074 20:14:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.074 20:14:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.074 20:14:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:56.074 20:14:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.074 20:14:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.074 20:14:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.074 20:14:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.074 20:14:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.074 20:14:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.074 20:14:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.074 20:14:10 -- accel/accel.sh@42 -- # jq -r . 00:06:56.074 [2024-10-16 20:14:10.634558] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
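The second decompress_mcore run is starting above with the same -m 0xf core mask. As the first run shows, the mask flows through the whole stack: accel_test passes -m 0xf, the EAL parameters carry -c 0xf, the configuration block prints 'Core mask: 0xf', four reactors start, and the results table gains one row per core. A minimal sketch of launching the same four-core run directly (harness-supplied JSON config omitted, as before):

  # Four-core decompress; each reactor reports its own Core,Thread row.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf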
00:06:56.074 [2024-10-16 20:14:10.634662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59541 ] 00:06:56.074 [2024-10-16 20:14:10.779726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.074 [2024-10-16 20:14:10.921018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.074 [2024-10-16 20:14:10.921202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.074 [2024-10-16 20:14:10.921531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.074 [2024-10-16 20:14:10.921548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val= 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val= 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val= 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val=0xf 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val= 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val= 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val=decompress 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val= 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val=software 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 
00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val=32 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val=32 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val=1 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val=Yes 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val= 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:56.336 20:14:11 -- accel/accel.sh@21 -- # val= 00:06:56.336 20:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # IFS=: 00:06:56.336 20:14:11 -- accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@21 -- # val= 00:06:57.721 20:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@21 -- # val= 00:06:57.721 20:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@21 -- # val= 00:06:57.721 20:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@21 -- # val= 00:06:57.721 20:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@21 -- # val= 00:06:57.721 20:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@21 -- # val= 00:06:57.721 20:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@21 -- # val= 00:06:57.721 20:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@21 -- # val= 00:06:57.721 20:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.721 20:14:12 -- 
accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@21 -- # val= 00:06:57.721 20:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.721 20:14:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.721 20:14:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.721 ************************************ 00:06:57.721 END TEST accel_decomp_mcore 00:06:57.721 ************************************ 00:06:57.721 20:14:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:57.721 20:14:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.721 00:06:57.721 real 0m3.835s 00:06:57.721 user 0m11.639s 00:06:57.721 sys 0m0.272s 00:06:57.721 20:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.721 20:14:12 -- common/autotest_common.sh@10 -- # set +x 00:06:57.721 20:14:12 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.721 20:14:12 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:57.721 20:14:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.721 20:14:12 -- common/autotest_common.sh@10 -- # set +x 00:06:57.721 ************************************ 00:06:57.721 START TEST accel_decomp_full_mcore 00:06:57.721 ************************************ 00:06:57.721 20:14:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.721 20:14:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.721 20:14:12 -- accel/accel.sh@17 -- # local accel_module 00:06:57.721 20:14:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.721 20:14:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.721 20:14:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.721 20:14:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.721 20:14:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.721 20:14:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.721 20:14:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.721 20:14:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.721 20:14:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.721 20:14:12 -- accel/accel.sh@42 -- # jq -r . 00:06:57.721 [2024-10-16 20:14:12.601371] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:57.721 [2024-10-16 20:14:12.601454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59585 ] 00:06:57.982 [2024-10-16 20:14:12.745829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.243 [2024-10-16 20:14:12.962499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.243 [2024-10-16 20:14:12.962832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.243 [2024-10-16 20:14:12.963185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.243 [2024-10-16 20:14:12.963195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.195 20:14:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:00.195 00:07:00.195 SPDK Configuration: 00:07:00.195 Core mask: 0xf 00:07:00.195 00:07:00.195 Accel Perf Configuration: 00:07:00.195 Workload Type: decompress 00:07:00.195 Transfer size: 111250 bytes 00:07:00.195 Vector count 1 00:07:00.195 Module: software 00:07:00.195 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.195 Queue depth: 32 00:07:00.195 Allocate depth: 32 00:07:00.195 # threads/core: 1 00:07:00.195 Run time: 1 seconds 00:07:00.195 Verify: Yes 00:07:00.195 00:07:00.195 Running for 1 seconds... 00:07:00.195 00:07:00.195 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.195 ------------------------------------------------------------------------------------ 00:07:00.195 0,0 4352/s 461 MiB/s 0 0 00:07:00.195 3,0 5600/s 594 MiB/s 0 0 00:07:00.195 2,0 4320/s 458 MiB/s 0 0 00:07:00.195 1,0 5632/s 597 MiB/s 0 0 00:07:00.195 ==================================================================================== 00:07:00.195 Total 19904/s 2111 MiB/s 0 0' 00:07:00.195 20:14:14 -- accel/accel.sh@20 -- # IFS=: 00:07:00.195 20:14:14 -- accel/accel.sh@20 -- # read -r var val 00:07:00.195 20:14:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.195 20:14:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.195 20:14:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.195 20:14:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.195 20:14:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.195 20:14:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.195 20:14:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.195 20:14:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.195 20:14:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.195 20:14:14 -- accel/accel.sh@42 -- # jq -r . 00:07:00.195 [2024-10-16 20:14:14.817582] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:07:00.195 [2024-10-16 20:14:14.817711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:07:00.195 [2024-10-16 20:14:14.970872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.456 [2024-10-16 20:14:15.204673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.456 [2024-10-16 20:14:15.204992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.456 [2024-10-16 20:14:15.205260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.456 [2024-10-16 20:14:15.205387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val= 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val= 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val= 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val=0xf 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val= 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val= 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val=decompress 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val= 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val=software 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 
00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val=32 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val=32 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val=1 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val=Yes 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val= 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.456 20:14:15 -- accel/accel.sh@21 -- # val= 00:07:00.456 20:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.456 20:14:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.360 20:14:16 -- accel/accel.sh@21 -- # val= 00:07:02.360 20:14:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.360 20:14:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.360 20:14:16 -- accel/accel.sh@20 -- # read -r var val 00:07:02.360 20:14:16 -- accel/accel.sh@21 -- # val= 00:07:02.360 20:14:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.360 20:14:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.360 20:14:16 -- accel/accel.sh@20 -- # read -r var val 00:07:02.361 20:14:16 -- accel/accel.sh@21 -- # val= 00:07:02.361 20:14:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # read -r var val 00:07:02.361 20:14:16 -- accel/accel.sh@21 -- # val= 00:07:02.361 20:14:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # read -r var val 00:07:02.361 20:14:16 -- accel/accel.sh@21 -- # val= 00:07:02.361 20:14:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # read -r var val 00:07:02.361 20:14:16 -- accel/accel.sh@21 -- # val= 00:07:02.361 20:14:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # read -r var val 00:07:02.361 20:14:16 -- accel/accel.sh@21 -- # val= 00:07:02.361 20:14:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # read -r var val 00:07:02.361 20:14:16 -- accel/accel.sh@21 -- # val= 00:07:02.361 20:14:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.361 20:14:16 -- 
accel/accel.sh@20 -- # read -r var val 00:07:02.361 20:14:16 -- accel/accel.sh@21 -- # val= 00:07:02.361 20:14:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # IFS=: 00:07:02.361 20:14:16 -- accel/accel.sh@20 -- # read -r var val 00:07:02.361 20:14:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.361 20:14:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:02.361 20:14:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.361 00:07:02.361 real 0m4.290s 00:07:02.361 user 0m12.592s 00:07:02.361 sys 0m0.345s 00:07:02.361 20:14:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.361 20:14:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.361 ************************************ 00:07:02.361 END TEST accel_decomp_full_mcore 00:07:02.361 ************************************ 00:07:02.361 20:14:16 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.361 20:14:16 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:02.361 20:14:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.361 20:14:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.361 ************************************ 00:07:02.361 START TEST accel_decomp_mthread 00:07:02.361 ************************************ 00:07:02.361 20:14:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.361 20:14:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.361 20:14:16 -- accel/accel.sh@17 -- # local accel_module 00:07:02.361 20:14:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.361 20:14:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.361 20:14:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.361 20:14:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.361 20:14:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.361 20:14:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.361 20:14:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.361 20:14:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.361 20:14:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.361 20:14:16 -- accel/accel.sh@42 -- # jq -r . 00:07:02.361 [2024-10-16 20:14:16.937428] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:02.361 [2024-10-16 20:14:16.937689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59658 ] 00:07:02.361 [2024-10-16 20:14:17.086673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.361 [2024-10-16 20:14:17.230420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.274 20:14:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:04.274 00:07:04.274 SPDK Configuration: 00:07:04.274 Core mask: 0x1 00:07:04.274 00:07:04.274 Accel Perf Configuration: 00:07:04.274 Workload Type: decompress 00:07:04.274 Transfer size: 4096 bytes 00:07:04.274 Vector count 1 00:07:04.274 Module: software 00:07:04.274 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.274 Queue depth: 32 00:07:04.274 Allocate depth: 32 00:07:04.274 # threads/core: 2 00:07:04.274 Run time: 1 seconds 00:07:04.274 Verify: Yes 00:07:04.274 00:07:04.274 Running for 1 seconds... 00:07:04.274 00:07:04.274 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.274 ------------------------------------------------------------------------------------ 00:07:04.274 0,1 41184/s 75 MiB/s 0 0 00:07:04.274 0,0 41088/s 75 MiB/s 0 0 00:07:04.274 ==================================================================================== 00:07:04.274 Total 82272/s 321 MiB/s 0 0' 00:07:04.274 20:14:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.274 20:14:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.274 20:14:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:04.274 20:14:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:04.274 20:14:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.274 20:14:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.274 20:14:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.274 20:14:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.274 20:14:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.274 20:14:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.274 20:14:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.274 20:14:18 -- accel/accel.sh@42 -- # jq -r . 00:07:04.275 [2024-10-16 20:14:18.860549] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
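Editor's note: the table above is what a passing -T 2 decompress run looks like; the "0,0" and "0,1" rows are the two worker threads on core 0, each moving roughly 41k transfers/s. A minimal sketch of replaying that invocation outside the harness, assuming the vagrant paths recorded in this log (point SPDK_REPO at your own checkout):

#!/usr/bin/env bash
# Minimal sketch, not the harness itself: replays the accel_perf command
# traced above. SPDK_REPO is an assumption; point it at your own checkout.
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}

# -t 1: run for 1 second; -w decompress: workload type; -l: compressed input
# file; -y: verify decompressed output; -T 2: two worker threads per core.
"$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_REPO/test/accel/bib" -y -T 2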
00:07:04.275 [2024-10-16 20:14:18.860782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59684 ] 00:07:04.275 [2024-10-16 20:14:19.007158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.275 [2024-10-16 20:14:19.149574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val= 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val= 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val= 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val=0x1 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val= 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val= 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val=decompress 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val= 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val=software 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val=32 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- 
accel/accel.sh@21 -- # val=32 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val=2 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val=Yes 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val= 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.534 20:14:19 -- accel/accel.sh@21 -- # val= 00:07:04.534 20:14:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.534 20:14:19 -- accel/accel.sh@20 -- # read -r var val 00:07:05.909 20:14:20 -- accel/accel.sh@21 -- # val= 00:07:05.909 20:14:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # IFS=: 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # read -r var val 00:07:05.909 20:14:20 -- accel/accel.sh@21 -- # val= 00:07:05.909 20:14:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # IFS=: 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # read -r var val 00:07:05.909 20:14:20 -- accel/accel.sh@21 -- # val= 00:07:05.909 20:14:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # IFS=: 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # read -r var val 00:07:05.909 20:14:20 -- accel/accel.sh@21 -- # val= 00:07:05.909 20:14:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # IFS=: 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # read -r var val 00:07:05.909 20:14:20 -- accel/accel.sh@21 -- # val= 00:07:05.909 20:14:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # IFS=: 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # read -r var val 00:07:05.909 20:14:20 -- accel/accel.sh@21 -- # val= 00:07:05.909 20:14:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # IFS=: 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # read -r var val 00:07:05.909 20:14:20 -- accel/accel.sh@21 -- # val= 00:07:05.909 20:14:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # IFS=: 00:07:05.909 20:14:20 -- accel/accel.sh@20 -- # read -r var val 00:07:05.909 20:14:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.909 20:14:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:05.909 20:14:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.909 00:07:05.909 real 0m3.826s 00:07:05.909 user 0m3.394s 00:07:05.909 sys 0m0.229s 00:07:05.909 20:14:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.909 20:14:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.909 ************************************ 00:07:05.909 END 
TEST accel_decomp_mthread 00:07:05.909 ************************************ 00:07:05.909 20:14:20 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.909 20:14:20 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:05.909 20:14:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.909 20:14:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.909 ************************************ 00:07:05.909 START TEST accel_deomp_full_mthread 00:07:05.909 ************************************ 00:07:05.909 20:14:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.909 20:14:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.909 20:14:20 -- accel/accel.sh@17 -- # local accel_module 00:07:05.909 20:14:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.909 20:14:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.909 20:14:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.909 20:14:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.909 20:14:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.909 20:14:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.909 20:14:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.909 20:14:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.910 20:14:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.910 20:14:20 -- accel/accel.sh@42 -- # jq -r . 00:07:05.910 [2024-10-16 20:14:20.810559] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:05.910 [2024-10-16 20:14:20.810785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:07:06.169 [2024-10-16 20:14:20.951354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.169 [2024-10-16 20:14:21.094780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.069 20:14:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:08.069 00:07:08.069 SPDK Configuration: 00:07:08.069 Core mask: 0x1 00:07:08.069 00:07:08.069 Accel Perf Configuration: 00:07:08.069 Workload Type: decompress 00:07:08.069 Transfer size: 111250 bytes 00:07:08.069 Vector count 1 00:07:08.069 Module: software 00:07:08.069 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.069 Queue depth: 32 00:07:08.069 Allocate depth: 32 00:07:08.069 # threads/core: 2 00:07:08.069 Run time: 1 seconds 00:07:08.069 Verify: Yes 00:07:08.069 00:07:08.069 Running for 1 seconds... 
00:07:08.069 00:07:08.069 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.069 ------------------------------------------------------------------------------------ 00:07:08.069 0,1 2752/s 113 MiB/s 0 0 00:07:08.069 0,0 2688/s 111 MiB/s 0 0 00:07:08.069 ==================================================================================== 00:07:08.069 Total 5440/s 577 MiB/s 0 0' 00:07:08.069 20:14:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.069 20:14:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.069 20:14:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.069 20:14:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.069 20:14:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.069 20:14:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.069 20:14:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.069 20:14:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.069 20:14:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.069 20:14:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.069 20:14:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.069 20:14:22 -- accel/accel.sh@42 -- # jq -r . 00:07:08.069 [2024-10-16 20:14:22.757970] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:08.069 [2024-10-16 20:14:22.758094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:07:08.069 [2024-10-16 20:14:22.906079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.328 [2024-10-16 20:14:23.085790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.328 20:14:23 -- accel/accel.sh@21 -- # val= 00:07:08.328 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val= 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val= 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val=0x1 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val= 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val= 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val=decompress 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val= 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val=software 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val=32 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val=32 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val=2 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val=Yes 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val= 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.329 20:14:23 -- accel/accel.sh@21 -- # val= 00:07:08.329 20:14:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.329 20:14:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.269 20:14:24 -- accel/accel.sh@21 -- # val= 00:07:10.269 20:14:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.269 20:14:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.269 20:14:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.269 20:14:24 -- accel/accel.sh@21 -- # val= 00:07:10.269 20:14:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.269 20:14:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.269 20:14:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.269 20:14:24 -- accel/accel.sh@21 -- # val= 00:07:10.269 20:14:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.269 20:14:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.269 20:14:24 -- accel/accel.sh@20 -- # 
read -r var val 00:07:10.269 20:14:24 -- accel/accel.sh@21 -- # val= 00:07:10.269 20:14:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.269 20:14:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.269 20:14:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 20:14:24 -- accel/accel.sh@21 -- # val= 00:07:10.270 20:14:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.270 20:14:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.270 20:14:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 20:14:24 -- accel/accel.sh@21 -- # val= 00:07:10.270 20:14:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.270 20:14:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.270 20:14:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 20:14:24 -- accel/accel.sh@21 -- # val= 00:07:10.270 20:14:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.270 20:14:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.270 20:14:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 20:14:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.270 20:14:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:10.270 20:14:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.270 00:07:10.270 real 0m4.095s 00:07:10.270 user 0m3.650s 00:07:10.270 sys 0m0.237s 00:07:10.270 20:14:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.270 20:14:24 -- common/autotest_common.sh@10 -- # set +x 00:07:10.270 ************************************ 00:07:10.270 END TEST accel_deomp_full_mthread 00:07:10.270 ************************************ 00:07:10.270 20:14:24 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:10.270 20:14:24 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:10.270 20:14:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:10.270 20:14:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.270 20:14:24 -- common/autotest_common.sh@10 -- # set +x 00:07:10.270 20:14:24 -- accel/accel.sh@129 -- # build_accel_config 00:07:10.270 20:14:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.270 20:14:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.270 20:14:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.270 20:14:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.270 20:14:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.270 20:14:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.270 20:14:24 -- accel/accel.sh@42 -- # jq -r . 00:07:10.270 ************************************ 00:07:10.270 START TEST accel_dif_functional_tests 00:07:10.270 ************************************ 00:07:10.270 20:14:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:10.270 [2024-10-16 20:14:24.967326] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
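Editor's note: every accel binary in this log, including the dif tester that has just started, takes its JSON config as -c /dev/fd/62: the harness builds the config in memory and exposes it on file descriptor 62 instead of writing a temp file. A hedged sketch of that pattern; DIF_BIN and the '{}' stand-in config are assumptions (the harness generates the real JSON via build_accel_config):

#!/usr/bin/env bash
# Hedged sketch of the /dev/fd config-passing pattern; paths and the
# placeholder config are assumptions, not the harness's actual values.
DIF_BIN=${DIF_BIN:-/home/vagrant/spdk_repo/spdk/test/accel/dif/dif}

# Open fd 62 on an in-memory JSON document; the child process inherits the
# fd and reads its config from /dev/fd/62, so nothing touches the disk.
exec 62< <(echo '{}')
"$DIF_BIN" -c /dev/fd/62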
00:07:10.270 [2024-10-16 20:14:24.967432] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59793 ] 00:07:10.270 [2024-10-16 20:14:25.113855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.528 [2024-10-16 20:14:25.298565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.528 [2024-10-16 20:14:25.298899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.528 [2024-10-16 20:14:25.299009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.787 00:07:10.787 00:07:10.787 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.787 http://cunit.sourceforge.net/ 00:07:10.787 00:07:10.787 00:07:10.787 Suite: accel_dif 00:07:10.787 Test: verify: DIF generated, GUARD check ...passed 00:07:10.787 Test: verify: DIF generated, APPTAG check ...passed 00:07:10.787 Test: verify: DIF generated, REFTAG check ...passed 00:07:10.787 Test: verify: DIF not generated, GUARD check ...[2024-10-16 20:14:25.517078] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:10.787 passed 00:07:10.787 Test: verify: DIF not generated, APPTAG check ...passed 00:07:10.787 Test: verify: DIF not generated, REFTAG check ...passed 00:07:10.787 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:10.787 Test: verify: APPTAG incorrect, APPTAG check ...[2024-10-16 20:14:25.517135] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:10.787 [2024-10-16 20:14:25.517179] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:10.787 [2024-10-16 20:14:25.517204] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:10.787 [2024-10-16 20:14:25.517228] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:10.787 [2024-10-16 20:14:25.517248] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:10.787 passed 00:07:10.787 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:10.787 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:10.787 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-10-16 20:14:25.517316] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:10.787 passed 00:07:10.787 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:10.787 Test: generate copy: DIF generated, GUARD check ...passed 00:07:10.787 Test: generate copy: DIF generated, APTTAG check ...passed[2024-10-16 20:14:25.517488] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:10.787 00:07:10.787 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:10.787 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:10.787 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:10.787 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:10.787 Test: generate copy: iovecs-len validate ...[2024-10-16 20:14:25.517901] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:10.787 passed 00:07:10.787 Test: generate copy: buffer alignment validate ...passed 00:07:10.787 00:07:10.787 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.787 suites 1 1 n/a 0 0 00:07:10.787 tests 20 20 20 0 0 00:07:10.787 asserts 204 204 204 0 n/a 00:07:10.787 00:07:10.787 Elapsed time = 0.003 seconds 00:07:11.724 00:07:11.724 real 0m1.409s 00:07:11.724 user 0m2.629s 00:07:11.724 sys 0m0.158s 00:07:11.724 20:14:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.724 20:14:26 -- common/autotest_common.sh@10 -- # set +x 00:07:11.724 ************************************ 00:07:11.724 END TEST accel_dif_functional_tests 00:07:11.724 ************************************ 00:07:11.724 00:07:11.724 real 1m27.034s 00:07:11.724 user 1m35.145s 00:07:11.724 sys 0m6.538s 00:07:11.724 20:14:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.724 20:14:26 -- common/autotest_common.sh@10 -- # set +x 00:07:11.724 ************************************ 00:07:11.724 END TEST accel 00:07:11.724 ************************************ 00:07:11.724 20:14:26 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:11.724 20:14:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.724 20:14:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.724 20:14:26 -- common/autotest_common.sh@10 -- # set +x 00:07:11.724 ************************************ 00:07:11.724 START TEST accel_rpc 00:07:11.724 ************************************ 00:07:11.724 20:14:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:11.724 * Looking for test storage... 00:07:11.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:11.724 20:14:26 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.724 20:14:26 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59863 00:07:11.724 20:14:26 -- accel/accel_rpc.sh@15 -- # waitforlisten 59863 00:07:11.724 20:14:26 -- common/autotest_common.sh@819 -- # '[' -z 59863 ']' 00:07:11.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.724 20:14:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.724 20:14:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:11.724 20:14:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.724 20:14:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:11.724 20:14:26 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:11.724 20:14:26 -- common/autotest_common.sh@10 -- # set +x 00:07:11.724 [2024-10-16 20:14:26.514368] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
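Editor's note: the accel_rpc suite below drives the target over JSON-RPC rather than through accel_perf (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py). A minimal sketch of the opcode-assignment flow it exercises, using the same RPC methods that appear in the trace and assuming a spdk_tgt already started with --wait-for-rpc on the default /var/tmp/spdk.sock socket:

#!/usr/bin/env bash
# Sketch of the accel opcode-assignment flow; paths assume this log's layout.
RPC=${RPC:-/home/vagrant/spdk_repo/spdk/scripts/rpc.py}

# Assigning an opcode to a module is only legal before subsystem init,
# which is why the target is started with --wait-for-rpc.
"$RPC" accel_assign_opc -o copy -m software
"$RPC" framework_start_init
# Confirm the 'copy' opcode is now served by the software module.
"$RPC" accel_get_opc_assignments | jq -r .copy | grep software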
00:07:11.724 [2024-10-16 20:14:26.514483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59863 ] 00:07:11.983 [2024-10-16 20:14:26.660313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.983 [2024-10-16 20:14:26.834299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:11.984 [2024-10-16 20:14:26.834510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.549 20:14:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:12.549 20:14:27 -- common/autotest_common.sh@852 -- # return 0 00:07:12.549 20:14:27 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:12.549 20:14:27 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:12.549 20:14:27 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:12.549 20:14:27 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:12.549 20:14:27 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:12.549 20:14:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.549 20:14:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.549 20:14:27 -- common/autotest_common.sh@10 -- # set +x 00:07:12.549 ************************************ 00:07:12.549 START TEST accel_assign_opcode 00:07:12.549 ************************************ 00:07:12.549 20:14:27 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:12.549 20:14:27 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:12.549 20:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.549 20:14:27 -- common/autotest_common.sh@10 -- # set +x 00:07:12.549 [2024-10-16 20:14:27.335161] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:12.549 20:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.549 20:14:27 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:12.549 20:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.549 20:14:27 -- common/autotest_common.sh@10 -- # set +x 00:07:12.549 [2024-10-16 20:14:27.343111] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:12.549 20:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.549 20:14:27 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:12.549 20:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.549 20:14:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.116 20:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:13.116 20:14:27 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:13.116 20:14:27 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:13.116 20:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:13.116 20:14:27 -- accel/accel_rpc.sh@42 -- # grep software 00:07:13.116 20:14:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.116 20:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:13.116 software 00:07:13.116 00:07:13.116 real 0m0.578s 00:07:13.116 user 0m0.034s 00:07:13.116 sys 0m0.008s 00:07:13.116 20:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.116 20:14:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.116 ************************************ 
00:07:13.116 END TEST accel_assign_opcode 00:07:13.116 ************************************ 00:07:13.116 20:14:27 -- accel/accel_rpc.sh@55 -- # killprocess 59863 00:07:13.116 20:14:27 -- common/autotest_common.sh@926 -- # '[' -z 59863 ']' 00:07:13.116 20:14:27 -- common/autotest_common.sh@930 -- # kill -0 59863 00:07:13.116 20:14:27 -- common/autotest_common.sh@931 -- # uname 00:07:13.116 20:14:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:13.116 20:14:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59863 00:07:13.116 20:14:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:13.116 killing process with pid 59863 00:07:13.116 20:14:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:13.116 20:14:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59863' 00:07:13.116 20:14:27 -- common/autotest_common.sh@945 -- # kill 59863 00:07:13.116 20:14:27 -- common/autotest_common.sh@950 -- # wait 59863 00:07:15.017 00:07:15.017 real 0m3.210s 00:07:15.017 user 0m3.181s 00:07:15.017 sys 0m0.373s 00:07:15.017 20:14:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.017 ************************************ 00:07:15.017 END TEST accel_rpc 00:07:15.017 ************************************ 00:07:15.017 20:14:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.017 20:14:29 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:15.017 20:14:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.017 20:14:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.017 20:14:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.017 ************************************ 00:07:15.017 START TEST app_cmdline 00:07:15.017 ************************************ 00:07:15.017 20:14:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:15.017 * Looking for test storage... 00:07:15.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:15.017 20:14:29 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.017 20:14:29 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59967 00:07:15.017 20:14:29 -- app/cmdline.sh@18 -- # waitforlisten 59967 00:07:15.017 20:14:29 -- common/autotest_common.sh@819 -- # '[' -z 59967 ']' 00:07:15.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.017 20:14:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.017 20:14:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:15.017 20:14:29 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.017 20:14:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.017 20:14:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:15.017 20:14:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.017 [2024-10-16 20:14:29.768441] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
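Editor's note: the app_cmdline suite that has just started boots spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable and everything else must fail with JSON-RPC error -32601. A minimal sketch of that check, assuming a target already running with that allow-list:

#!/usr/bin/env bash
# Sketch of the RPC allow-list behavior verified below; RPC path follows
# this log's layout.
RPC=${RPC:-/home/vagrant/spdk_repo/spdk/scripts/rpc.py}

"$RPC" spdk_get_version | jq -r .version     # allowed
"$RPC" rpc_get_methods | jq -r '.[]' | sort  # allowed: exactly these two methods
# Every other method must come back as error -32601 (Method not found).
"$RPC" env_dpdk_get_mem_stats || echo 'rejected as expected'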
00:07:15.017 [2024-10-16 20:14:29.768559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59967 ] 00:07:15.017 [2024-10-16 20:14:29.914631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.275 [2024-10-16 20:14:30.101313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:15.275 [2024-10-16 20:14:30.101502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.649 20:14:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.649 20:14:31 -- common/autotest_common.sh@852 -- # return 0 00:07:16.649 20:14:31 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:16.649 { 00:07:16.649 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:07:16.649 "fields": { 00:07:16.649 "major": 24, 00:07:16.649 "minor": 1, 00:07:16.649 "patch": 1, 00:07:16.649 "suffix": "-pre", 00:07:16.649 "commit": "726a04d70" 00:07:16.649 } 00:07:16.649 } 00:07:16.649 20:14:31 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:16.649 20:14:31 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:16.649 20:14:31 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:16.649 20:14:31 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:16.649 20:14:31 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:16.649 20:14:31 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:16.649 20:14:31 -- app/cmdline.sh@26 -- # sort 00:07:16.649 20:14:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.649 20:14:31 -- common/autotest_common.sh@10 -- # set +x 00:07:16.649 20:14:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.649 20:14:31 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:16.649 20:14:31 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:16.649 20:14:31 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.649 20:14:31 -- common/autotest_common.sh@640 -- # local es=0 00:07:16.649 20:14:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.649 20:14:31 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.649 20:14:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:16.649 20:14:31 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.649 20:14:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:16.649 20:14:31 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.649 20:14:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:16.649 20:14:31 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.649 20:14:31 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:16.649 20:14:31 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.909 request: 00:07:16.909 { 00:07:16.909 "method": "env_dpdk_get_mem_stats", 00:07:16.909 "req_id": 1 00:07:16.909 } 00:07:16.909 Got 
JSON-RPC error response 00:07:16.909 response: 00:07:16.909 { 00:07:16.909 "code": -32601, 00:07:16.909 "message": "Method not found" 00:07:16.909 } 00:07:16.909 20:14:31 -- common/autotest_common.sh@643 -- # es=1 00:07:16.909 20:14:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:16.909 20:14:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:16.909 20:14:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:16.909 20:14:31 -- app/cmdline.sh@1 -- # killprocess 59967 00:07:16.909 20:14:31 -- common/autotest_common.sh@926 -- # '[' -z 59967 ']' 00:07:16.909 20:14:31 -- common/autotest_common.sh@930 -- # kill -0 59967 00:07:16.909 20:14:31 -- common/autotest_common.sh@931 -- # uname 00:07:16.909 20:14:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:16.909 20:14:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59967 00:07:16.909 20:14:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:16.909 killing process with pid 59967 00:07:16.909 20:14:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:16.909 20:14:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59967' 00:07:16.909 20:14:31 -- common/autotest_common.sh@945 -- # kill 59967 00:07:16.909 20:14:31 -- common/autotest_common.sh@950 -- # wait 59967 00:07:18.294 00:07:18.294 real 0m3.565s 00:07:18.294 user 0m3.994s 00:07:18.294 sys 0m0.409s 00:07:18.294 20:14:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.294 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.294 ************************************ 00:07:18.294 END TEST app_cmdline 00:07:18.294 ************************************ 00:07:18.553 20:14:33 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:18.553 20:14:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.553 20:14:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.553 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.553 ************************************ 00:07:18.553 START TEST version 00:07:18.553 ************************************ 00:07:18.553 20:14:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:18.553 * Looking for test storage... 
00:07:18.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:18.553 20:14:33 -- app/version.sh@17 -- # get_header_version major 00:07:18.553 20:14:33 -- app/version.sh@14 -- # cut -f2 00:07:18.553 20:14:33 -- app/version.sh@14 -- # tr -d '"' 00:07:18.553 20:14:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.553 20:14:33 -- app/version.sh@17 -- # major=24 00:07:18.553 20:14:33 -- app/version.sh@18 -- # get_header_version minor 00:07:18.553 20:14:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.553 20:14:33 -- app/version.sh@14 -- # tr -d '"' 00:07:18.553 20:14:33 -- app/version.sh@14 -- # cut -f2 00:07:18.553 20:14:33 -- app/version.sh@18 -- # minor=1 00:07:18.553 20:14:33 -- app/version.sh@19 -- # get_header_version patch 00:07:18.553 20:14:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.553 20:14:33 -- app/version.sh@14 -- # cut -f2 00:07:18.553 20:14:33 -- app/version.sh@14 -- # tr -d '"' 00:07:18.553 20:14:33 -- app/version.sh@19 -- # patch=1 00:07:18.553 20:14:33 -- app/version.sh@20 -- # get_header_version suffix 00:07:18.553 20:14:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.553 20:14:33 -- app/version.sh@14 -- # tr -d '"' 00:07:18.553 20:14:33 -- app/version.sh@14 -- # cut -f2 00:07:18.553 20:14:33 -- app/version.sh@20 -- # suffix=-pre 00:07:18.553 20:14:33 -- app/version.sh@22 -- # version=24.1 00:07:18.553 20:14:33 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:18.553 20:14:33 -- app/version.sh@25 -- # version=24.1.1 00:07:18.553 20:14:33 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:18.553 20:14:33 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:18.553 20:14:33 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:18.553 20:14:33 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:18.553 20:14:33 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:18.553 00:07:18.553 real 0m0.114s 00:07:18.553 user 0m0.064s 00:07:18.553 sys 0m0.069s 00:07:18.553 20:14:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.553 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.553 ************************************ 00:07:18.553 END TEST version 00:07:18.553 ************************************ 00:07:18.553 20:14:33 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:18.553 20:14:33 -- spdk/autotest.sh@204 -- # uname -s 00:07:18.553 20:14:33 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:18.553 20:14:33 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:18.553 20:14:33 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:18.553 20:14:33 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:07:18.553 20:14:33 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:18.553 20:14:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:18.553 20:14:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.553 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.553 
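Editor's note: version.sh above derives 24.1.1rc0 purely from include/spdk/version.h, grepping each #define and stripping quotes. A sketch of that derivation, assuming the header path from this log; the -pre to rc0 mapping shown is an assumption, the exact rule lives in test/app/version.sh:

#!/usr/bin/env bash
# Sketch of the version.h parsing traced above; HDR path is an assumption.
HDR=${HDR:-/home/vagrant/spdk_repo/spdk/include/spdk/version.h}

get_header_version() {
    # Same grep | cut | tr pipeline as the trace above.
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$HDR" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)    # 24
minor=$(get_header_version MINOR)    # 1
patch=$(get_header_version PATCH)    # 1
suffix=$(get_header_version SUFFIX)  # -pre
version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"
[[ $suffix == -pre ]] && version="${version}rc0"   # assumed mapping
echo "$version"                                    # 24.1.1rc0, as checked above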
************************************ 00:07:18.553 START TEST blockdev_nvme 00:07:18.553 ************************************ 00:07:18.553 20:14:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:18.553 * Looking for test storage... 00:07:18.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:18.553 20:14:33 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:18.553 20:14:33 -- bdev/nbd_common.sh@6 -- # set -e 00:07:18.815 20:14:33 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:18.815 20:14:33 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:18.815 20:14:33 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:18.815 20:14:33 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:18.815 20:14:33 -- bdev/blockdev.sh@18 -- # : 00:07:18.815 20:14:33 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:18.815 20:14:33 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:18.815 20:14:33 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:18.815 20:14:33 -- bdev/blockdev.sh@672 -- # uname -s 00:07:18.815 20:14:33 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:18.815 20:14:33 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:18.815 20:14:33 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:18.815 20:14:33 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:18.815 20:14:33 -- bdev/blockdev.sh@682 -- # dek= 00:07:18.815 20:14:33 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:18.815 20:14:33 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:18.815 20:14:33 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:18.815 20:14:33 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:18.815 20:14:33 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:18.815 20:14:33 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:18.815 20:14:33 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60136 00:07:18.815 20:14:33 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:18.815 20:14:33 -- bdev/blockdev.sh@47 -- # waitforlisten 60136 00:07:18.815 20:14:33 -- common/autotest_common.sh@819 -- # '[' -z 60136 ']' 00:07:18.815 20:14:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.815 20:14:33 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:18.815 20:14:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.815 20:14:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.815 20:14:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.815 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:07:18.815 [2024-10-16 20:14:33.558059] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
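Editor's note: the setup step below loads a generated bdev config that attaches one NVMe controller per PCIe function (Nvme0 through Nvme3). A single-controller sketch of that load; the JSON shape and the load_subsystem_config -j call mirror the trace, but the traddr is the QEMU test VM's and must be replaced with a real PCIe address (lspci | grep 'Non-Volatile') on other machines:

#!/usr/bin/env bash
# Single-controller sketch of the bdev config load driven below.
RPC=${RPC:-/home/vagrant/spdk_repo/spdk/scripts/rpc.py}

"$RPC" load_subsystem_config -j '{
  "subsystem": "bdev",
  "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } }
  ]
}'
# Block until bdev examination finishes, as blockdev.sh does below.
"$RPC" bdev_wait_for_examine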
00:07:18.815 [2024-10-16 20:14:33.558173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60136 ] 00:07:18.815 [2024-10-16 20:14:33.709129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.073 [2024-10-16 20:14:33.892532] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:19.073 [2024-10-16 20:14:33.892731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.447 20:14:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:20.447 20:14:35 -- common/autotest_common.sh@852 -- # return 0 00:07:20.447 20:14:35 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:20.447 20:14:35 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:20.447 20:14:35 -- bdev/blockdev.sh@79 -- # local json 00:07:20.447 20:14:35 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:20.447 20:14:35 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:20.447 20:14:35 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:20.447 20:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.447 20:14:35 -- common/autotest_common.sh@10 -- # set +x 00:07:20.706 20:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.706 20:14:35 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:20.706 20:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.706 20:14:35 -- common/autotest_common.sh@10 -- # set +x 00:07:20.706 20:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.706 20:14:35 -- bdev/blockdev.sh@738 -- # cat 00:07:20.706 20:14:35 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:20.706 20:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.706 20:14:35 -- common/autotest_common.sh@10 -- # set +x 00:07:20.706 20:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.706 20:14:35 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:20.706 20:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.706 20:14:35 -- common/autotest_common.sh@10 -- # set +x 00:07:20.706 20:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.706 20:14:35 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:20.706 20:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.706 20:14:35 -- common/autotest_common.sh@10 -- # set +x 00:07:20.706 20:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.706 20:14:35 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:20.706 20:14:35 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:20.706 20:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.706 20:14:35 -- common/autotest_common.sh@10 -- # set +x 00:07:20.706 20:14:35 -- bdev/blockdev.sh@746 -- # 
jq -r '.[] | select(.claimed == false)' 00:07:20.706 20:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.706 20:14:35 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:20.706 20:14:35 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:20.707 20:14:35 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b641324c-840d-4fef-9c66-a3ee127e193a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b641324c-840d-4fef-9c66-a3ee127e193a",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "796865ab-ad83-40da-960c-4c3764e10f85"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "796865ab-ad83-40da-960c-4c3764e10f85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a6877829-310f-4dfd-8efd-d2bea889e684"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a6877829-310f-4dfd-8efd-d2bea889e684",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "481a5bce-972a-4fbe-9153-5252dbd7350a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "481a5bce-972a-4fbe-9153-5252dbd7350a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8f1bf8df-74bd-414c-9630-f4a0255542ea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8f1bf8df-74bd-414c-9630-f4a0255542ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "475fe6f3-c5d6-4ec0-8d29-e0921227b057"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"475fe6f3-c5d6-4ec0-8d29-e0921227b057",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:20.707 20:14:35 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:20.707 20:14:35 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:20.707 20:14:35 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:20.707 20:14:35 -- bdev/blockdev.sh@752 -- # killprocess 60136 00:07:20.707 20:14:35 -- common/autotest_common.sh@926 -- # '[' -z 60136 ']' 00:07:20.707 20:14:35 -- common/autotest_common.sh@930 -- # kill -0 60136 00:07:20.707 20:14:35 -- common/autotest_common.sh@931 -- # uname 00:07:20.707 20:14:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:20.707 20:14:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60136 00:07:20.707 20:14:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:20.707 20:14:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:20.707 killing process with pid 60136 00:07:20.707 20:14:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60136' 00:07:20.707 20:14:35 -- common/autotest_common.sh@945 -- # kill 60136 00:07:20.707 20:14:35 -- common/autotest_common.sh@950 -- # wait 60136 00:07:22.084 20:14:36 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:22.084 20:14:36 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:22.084 20:14:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:22.084 20:14:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.084 20:14:36 -- common/autotest_common.sh@10 -- # set +x 00:07:22.084 ************************************ 00:07:22.084 START TEST bdev_hello_world 00:07:22.084 ************************************ 00:07:22.084 20:14:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:22.360 [2024-10-16 20:14:37.029107] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:22.360 [2024-10-16 20:14:37.029212] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60228 ] 00:07:22.360 [2024-10-16 20:14:37.174944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.652 [2024-10-16 20:14:37.320189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.912 [2024-10-16 20:14:37.785903] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:22.912 [2024-10-16 20:14:37.785952] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:22.912 [2024-10-16 20:14:37.785968] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:22.912 [2024-10-16 20:14:37.787850] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:22.912 [2024-10-16 20:14:37.788368] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:22.912 [2024-10-16 20:14:37.788390] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:22.912 [2024-10-16 20:14:37.788741] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:22.912 00:07:22.912 [2024-10-16 20:14:37.788762] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:23.848 00:07:23.848 real 0m1.446s 00:07:23.848 user 0m1.194s 00:07:23.848 sys 0m0.147s 00:07:23.848 ************************************ 00:07:23.848 END TEST bdev_hello_world 00:07:23.848 ************************************ 00:07:23.848 20:14:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.848 20:14:38 -- common/autotest_common.sh@10 -- # set +x 00:07:23.848 20:14:38 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:23.848 20:14:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:23.848 20:14:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.848 20:14:38 -- common/autotest_common.sh@10 -- # set +x 00:07:23.848 ************************************ 00:07:23.848 START TEST bdev_bounds 00:07:23.848 ************************************ 00:07:23.848 20:14:38 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:07:23.848 20:14:38 -- bdev/blockdev.sh@288 -- # bdevio_pid=60264 00:07:23.848 20:14:38 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.848 Process bdevio pid: 60264 00:07:23.848 20:14:38 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60264' 00:07:23.848 20:14:38 -- bdev/blockdev.sh@291 -- # waitforlisten 60264 00:07:23.848 20:14:38 -- common/autotest_common.sh@819 -- # '[' -z 60264 ']' 00:07:23.848 20:14:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.848 20:14:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:23.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.848 20:14:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
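The hello_world stage above exercises SPDK's hello_bdev example end to end: the app attaches the bdevs described in a JSON config, writes the string "Hello World!" to Nvme0n1, reads it back, and shuts down in about 1.4 s of wall time. A minimal standalone reproduction of that step, assuming a built SPDK checkout (SPDK_DIR and the config path are illustrative; the bdev name Nvme0n1 and the --json/-b flags come from the log itself):

    #!/usr/bin/env bash
    set -e
    SPDK_DIR=${SPDK_DIR:-$HOME/spdk_repo/spdk}   # assumption: an already-built SPDK tree
    CONF=/tmp/hello_bdev.json                    # illustrative config path
    # gen_nvme.sh emits bdev_nvme_attach_controller entries for every local
    # NVMe controller, matching the load_subsystem_config payload seen above.
    "$SPDK_DIR/scripts/gen_nvme.sh" --json-with-subsystems > "$CONF"
    # hello_bdev opens the named bdev, writes "Hello World!", and reads it back.
    "$SPDK_DIR/build/examples/hello_bdev" --json "$CONF" -b Nvme0n1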
00:07:23.848 20:14:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:23.848 20:14:38 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:23.848 20:14:38 -- common/autotest_common.sh@10 -- # set +x 00:07:23.848 [2024-10-16 20:14:38.520202] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:23.848 [2024-10-16 20:14:38.520293] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60264 ] 00:07:23.848 [2024-10-16 20:14:38.659104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.106 [2024-10-16 20:14:38.807123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.106 [2024-10-16 20:14:38.807242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.106 [2024-10-16 20:14:38.807347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.675 20:14:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:24.675 20:14:39 -- common/autotest_common.sh@852 -- # return 0 00:07:24.675 20:14:39 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:24.675 I/O targets: 00:07:24.675 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:24.675 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:24.675 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:24.675 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:24.675 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:24.675 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:24.675 00:07:24.675 00:07:24.675 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.675 http://cunit.sourceforge.net/ 00:07:24.675 00:07:24.675 00:07:24.675 Suite: bdevio tests on: Nvme3n1 00:07:24.675 Test: blockdev write read block ...passed 00:07:24.675 Test: blockdev write zeroes read block ...passed 00:07:24.675 Test: blockdev write zeroes read no split ...passed 00:07:24.675 Test: blockdev write zeroes read split ...passed 00:07:24.675 Test: blockdev write zeroes read split partial ...passed 00:07:24.675 Test: blockdev reset ...[2024-10-16 20:14:39.483406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:24.675 [2024-10-16 20:14:39.486286] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:24.675 passed 00:07:24.675 Test: blockdev write read 8 blocks ...passed 00:07:24.675 Test: blockdev write read size > 128k ...passed 00:07:24.675 Test: blockdev write read invalid size ...passed 00:07:24.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:24.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:24.675 Test: blockdev write read max offset ...passed 00:07:24.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:24.675 Test: blockdev writev readv 8 blocks ...passed 00:07:24.675 Test: blockdev writev readv 30 x 1block ...passed 00:07:24.675 Test: blockdev writev readv block ...passed 00:07:24.675 Test: blockdev writev readv size > 128k ...passed 00:07:24.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:24.675 Test: blockdev comparev and writev ...[2024-10-16 20:14:39.494238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fa0e000 len:0x1000 00:07:24.675 [2024-10-16 20:14:39.494361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:24.675 passed 00:07:24.675 Test: blockdev nvme passthru rw ...passed 00:07:24.675 Test: blockdev nvme passthru vendor specific ...[2024-10-16 20:14:39.494994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:24.675 [2024-10-16 20:14:39.495090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:24.675 passed 00:07:24.675 Test: blockdev nvme admin passthru ...passed 00:07:24.675 Test: blockdev copy ...passed 00:07:24.675 Suite: bdevio tests on: Nvme2n3 00:07:24.675 Test: blockdev write read block ...passed 00:07:24.675 Test: blockdev write zeroes read block ...passed 00:07:24.675 Test: blockdev write zeroes read no split ...passed 00:07:24.675 Test: blockdev write zeroes read split ...passed 00:07:24.675 Test: blockdev write zeroes read split partial ...passed 00:07:24.675 Test: blockdev reset ...[2024-10-16 20:14:39.550950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:24.675 [2024-10-16 20:14:39.555542] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:24.675 passed 00:07:24.675 Test: blockdev write read 8 blocks ...passed 00:07:24.675 Test: blockdev write read size > 128k ...passed 00:07:24.675 Test: blockdev write read invalid size ...passed 00:07:24.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:24.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:24.675 Test: blockdev write read max offset ...passed 00:07:24.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:24.675 Test: blockdev writev readv 8 blocks ...passed 00:07:24.675 Test: blockdev writev readv 30 x 1block ...passed 00:07:24.675 Test: blockdev writev readv block ...passed 00:07:24.675 Test: blockdev writev readv size > 128k ...passed 00:07:24.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:24.675 Test: blockdev comparev and writev ...[2024-10-16 20:14:39.567259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fa0a000 len:0x1000 00:07:24.675 [2024-10-16 20:14:39.567360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:24.675 passed 00:07:24.675 Test: blockdev nvme passthru rw ...passed 00:07:24.675 Test: blockdev nvme passthru vendor specific ...[2024-10-16 20:14:39.568672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:24.675 [2024-10-16 20:14:39.568748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:24.675 passed 00:07:24.675 Test: blockdev nvme admin passthru ...passed 00:07:24.675 Test: blockdev copy ...passed 00:07:24.675 Suite: bdevio tests on: Nvme2n2 00:07:24.675 Test: blockdev write read block ...passed 00:07:24.675 Test: blockdev write zeroes read block ...passed 00:07:24.675 Test: blockdev write zeroes read no split ...passed 00:07:24.675 Test: blockdev write zeroes read split ...passed 00:07:24.934 Test: blockdev write zeroes read split partial ...passed 00:07:24.934 Test: blockdev reset ...[2024-10-16 20:14:39.620545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:24.934 [2024-10-16 20:14:39.623281] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:24.934 passed 00:07:24.934 Test: blockdev write read 8 blocks ...passed 00:07:24.934 Test: blockdev write read size > 128k ...passed 00:07:24.934 Test: blockdev write read invalid size ...passed 00:07:24.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:24.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:24.934 Test: blockdev write read max offset ...passed 00:07:24.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:24.934 Test: blockdev writev readv 8 blocks ...passed 00:07:24.934 Test: blockdev writev readv 30 x 1block ...passed 00:07:24.934 Test: blockdev writev readv block ...passed 00:07:24.934 Test: blockdev writev readv size > 128k ...passed 00:07:24.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:24.934 Test: blockdev comparev and writev ...[2024-10-16 20:14:39.633352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fe06000 len:0x1000 00:07:24.934 [2024-10-16 20:14:39.633446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:24.934 passed 00:07:24.934 Test: blockdev nvme passthru rw ...passed 00:07:24.934 Test: blockdev nvme passthru vendor specific ...[2024-10-16 20:14:39.634775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:24.934 [2024-10-16 20:14:39.634846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:24.934 passed 00:07:24.934 Test: blockdev nvme admin passthru ...passed 00:07:24.934 Test: blockdev copy ...passed 00:07:24.934 Suite: bdevio tests on: Nvme2n1 00:07:24.934 Test: blockdev write read block ...passed 00:07:24.934 Test: blockdev write zeroes read block ...passed 00:07:24.934 Test: blockdev write zeroes read no split ...passed 00:07:24.934 Test: blockdev write zeroes read split ...passed 00:07:24.934 Test: blockdev write zeroes read split partial ...passed 00:07:24.934 Test: blockdev reset ...[2024-10-16 20:14:39.690255] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:24.934 [2024-10-16 20:14:39.693311] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:24.934 passed 00:07:24.934 Test: blockdev write read 8 blocks ...passed 00:07:24.934 Test: blockdev write read size > 128k ...passed 00:07:24.934 Test: blockdev write read invalid size ...passed 00:07:24.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:24.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:24.934 Test: blockdev write read max offset ...passed 00:07:24.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:24.934 Test: blockdev writev readv 8 blocks ...passed 00:07:24.934 Test: blockdev writev readv 30 x 1block ...passed 00:07:24.934 Test: blockdev writev readv block ...passed 00:07:24.934 Test: blockdev writev readv size > 128k ...passed 00:07:24.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:24.934 Test: blockdev comparev and writev ...[2024-10-16 20:14:39.708109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fe01000 len:0x1000 00:07:24.934 [2024-10-16 20:14:39.708208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:24.934 passed 00:07:24.934 Test: blockdev nvme passthru rw ...passed 00:07:24.934 Test: blockdev nvme passthru vendor specific ...[2024-10-16 20:14:39.710163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:24.934 [2024-10-16 20:14:39.710245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:24.934 passed 00:07:24.934 Test: blockdev nvme admin passthru ...passed 00:07:24.934 Test: blockdev copy ...passed 00:07:24.934 Suite: bdevio tests on: Nvme1n1 00:07:24.934 Test: blockdev write read block ...passed 00:07:24.934 Test: blockdev write zeroes read block ...passed 00:07:24.934 Test: blockdev write zeroes read no split ...passed 00:07:24.934 Test: blockdev write zeroes read split ...passed 00:07:24.934 Test: blockdev write zeroes read split partial ...passed 00:07:24.934 Test: blockdev reset ...[2024-10-16 20:14:39.771436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:24.934 [2024-10-16 20:14:39.774952] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:24.934 passed 00:07:24.934 Test: blockdev write read 8 blocks ...passed 00:07:24.934 Test: blockdev write read size > 128k ...passed 00:07:24.934 Test: blockdev write read invalid size ...passed 00:07:24.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:24.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:24.934 Test: blockdev write read max offset ...passed 00:07:24.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:24.934 Test: blockdev writev readv 8 blocks ...passed 00:07:24.934 Test: blockdev writev readv 30 x 1block ...passed 00:07:24.934 Test: blockdev writev readv block ...passed 00:07:24.934 Test: blockdev writev readv size > 128k ...passed 00:07:24.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:24.934 Test: blockdev comparev and writev ...[2024-10-16 20:14:39.786137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x270a06000 len:0x1000 00:07:24.934 [2024-10-16 20:14:39.786243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:24.934 passed 00:07:24.934 Test: blockdev nvme passthru rw ...passed 00:07:24.934 Test: blockdev nvme passthru vendor specific ...[2024-10-16 20:14:39.787691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:24.934 [2024-10-16 20:14:39.787765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:24.934 passed 00:07:24.934 Test: blockdev nvme admin passthru ...passed 00:07:24.934 Test: blockdev copy ...passed 00:07:24.934 Suite: bdevio tests on: Nvme0n1 00:07:24.934 Test: blockdev write read block ...passed 00:07:24.934 Test: blockdev write zeroes read block ...passed 00:07:24.934 Test: blockdev write zeroes read no split ...passed 00:07:24.934 Test: blockdev write zeroes read split ...passed 00:07:24.934 Test: blockdev write zeroes read split partial ...passed 00:07:24.934 Test: blockdev reset ...[2024-10-16 20:14:39.840168] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:24.934 [2024-10-16 20:14:39.843303] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:24.934 passed 00:07:24.934 Test: blockdev write read 8 blocks ...passed 00:07:24.934 Test: blockdev write read size > 128k ...passed 00:07:24.934 Test: blockdev write read invalid size ...passed 00:07:24.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:24.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:24.934 Test: blockdev write read max offset ...passed 00:07:24.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:24.934 Test: blockdev writev readv 8 blocks ...passed 00:07:24.934 Test: blockdev writev readv 30 x 1block ...passed 00:07:24.934 Test: blockdev writev readv block ...passed 00:07:24.934 Test: blockdev writev readv size > 128k ...passed 00:07:24.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:24.934 Test: blockdev comparev and writev ...passed 00:07:24.934 Test: blockdev nvme passthru rw ...[2024-10-16 20:14:39.852596] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:24.934 separate metadata which is not supported yet. 00:07:24.934 passed 00:07:24.934 Test: blockdev nvme passthru vendor specific ...[2024-10-16 20:14:39.853661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:24.934 [2024-10-16 20:14:39.853745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:24.934 passed 00:07:24.934 Test: blockdev nvme admin passthru ...passed 00:07:24.934 Test: blockdev copy ...passed 00:07:24.934 00:07:24.934 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.934 suites 6 6 n/a 0 0 00:07:24.934 tests 138 138 138 0 0 00:07:24.934 asserts 893 893 893 0 n/a 00:07:24.934 00:07:24.934 Elapsed time = 1.119 seconds 00:07:25.193 0 00:07:25.193 20:14:39 -- bdev/blockdev.sh@293 -- # killprocess 60264 00:07:25.193 20:14:39 -- common/autotest_common.sh@926 -- # '[' -z 60264 ']' 00:07:25.193 20:14:39 -- common/autotest_common.sh@930 -- # kill -0 60264 00:07:25.193 20:14:39 -- common/autotest_common.sh@931 -- # uname 00:07:25.193 20:14:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:25.193 20:14:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60264 00:07:25.193 20:14:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:25.193 killing process with pid 60264 00:07:25.193 20:14:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:25.193 20:14:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60264' 00:07:25.193 20:14:39 -- common/autotest_common.sh@945 -- # kill 60264 00:07:25.193 20:14:39 -- common/autotest_common.sh@950 -- # wait 60264 00:07:25.761 20:14:40 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:25.761 00:07:25.761 real 0m2.109s 00:07:25.761 user 0m5.232s 00:07:25.761 sys 0m0.256s 00:07:25.761 ************************************ 00:07:25.761 END TEST bdev_bounds 00:07:25.761 ************************************ 00:07:25.761 20:14:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.761 20:14:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.761 20:14:40 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:25.761 20:14:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:25.761 20:14:40 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.761 20:14:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.761 ************************************ 00:07:25.761 START TEST bdev_nbd 00:07:25.761 ************************************ 00:07:25.761 20:14:40 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:25.761 20:14:40 -- bdev/blockdev.sh@298 -- # uname -s 00:07:25.761 20:14:40 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:25.761 20:14:40 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.761 20:14:40 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:25.761 20:14:40 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:25.761 20:14:40 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:25.761 20:14:40 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:25.761 20:14:40 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:25.761 20:14:40 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:25.761 20:14:40 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:25.761 20:14:40 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:25.761 20:14:40 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:25.761 20:14:40 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:25.761 20:14:40 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:25.761 20:14:40 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:25.761 20:14:40 -- bdev/blockdev.sh@316 -- # nbd_pid=60318 00:07:25.761 20:14:40 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:25.761 20:14:40 -- bdev/blockdev.sh@318 -- # waitforlisten 60318 /var/tmp/spdk-nbd.sock 00:07:25.761 20:14:40 -- common/autotest_common.sh@819 -- # '[' -z 60318 ']' 00:07:25.761 20:14:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:25.761 20:14:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:25.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:25.761 20:14:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:25.761 20:14:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:25.761 20:14:40 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:25.761 20:14:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.761 [2024-10-16 20:14:40.679597] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:25.761 [2024-10-16 20:14:40.679706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.020 [2024-10-16 20:14:40.825588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.278 [2024-10-16 20:14:40.972325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.844 20:14:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:26.844 20:14:41 -- common/autotest_common.sh@852 -- # return 0 00:07:26.844 20:14:41 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@24 -- # local i 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:26.844 20:14:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:26.844 20:14:41 -- common/autotest_common.sh@857 -- # local i 00:07:26.844 20:14:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:26.844 20:14:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:26.844 20:14:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:26.844 20:14:41 -- common/autotest_common.sh@861 -- # break 00:07:26.844 20:14:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:26.844 20:14:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:26.844 20:14:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.844 1+0 records in 00:07:26.844 1+0 records out 00:07:26.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286699 s, 14.3 MB/s 00:07:26.844 20:14:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.844 20:14:41 -- common/autotest_common.sh@874 -- # size=4096 00:07:26.844 20:14:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.844 20:14:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:26.844 20:14:41 -- common/autotest_common.sh@877 -- # return 0 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:26.844 20:14:41 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:26.844 20:14:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:27.102 20:14:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:27.102 20:14:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:27.102 20:14:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:27.102 20:14:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:27.102 20:14:41 -- common/autotest_common.sh@857 -- # local i 00:07:27.102 20:14:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:27.102 20:14:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:27.102 20:14:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:27.102 20:14:41 -- common/autotest_common.sh@861 -- # break 00:07:27.102 20:14:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:27.102 20:14:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:27.102 20:14:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.102 1+0 records in 00:07:27.102 1+0 records out 00:07:27.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319778 s, 12.8 MB/s 00:07:27.102 20:14:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.102 20:14:41 -- common/autotest_common.sh@874 -- # size=4096 00:07:27.102 20:14:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.102 20:14:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:27.102 20:14:41 -- common/autotest_common.sh@877 -- # return 0 00:07:27.102 20:14:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:27.102 20:14:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:27.102 20:14:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:27.361 20:14:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:27.361 20:14:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:27.361 20:14:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:27.361 20:14:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:07:27.361 20:14:42 -- common/autotest_common.sh@857 -- # local i 00:07:27.361 20:14:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:27.361 20:14:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:27.361 20:14:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:07:27.361 20:14:42 -- common/autotest_common.sh@861 -- # break 00:07:27.361 20:14:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:27.361 20:14:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:27.361 20:14:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.361 1+0 records in 00:07:27.361 1+0 records out 00:07:27.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288469 s, 14.2 MB/s 00:07:27.361 20:14:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.361 20:14:42 -- common/autotest_common.sh@874 -- # size=4096 00:07:27.361 20:14:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.361 20:14:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:27.361 20:14:42 -- common/autotest_common.sh@877 -- # return 0 
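Each nbd_start_disk call in this start/stop pass is gated by the waitfornbd helper traced above: it polls /proc/partitions until the kernel registers the new device, then proves the device actually services I/O with a single 4 KiB O_DIRECT read. A condensed sketch of that pattern, keeping the 20-iteration retry budget shown in the trace (the real helper lives in SPDK's test/common/autotest_common.sh; the poll interval here is an assumption, since the delay itself is not echoed in the xtrace):

    # Sketch of the waitfornbd readiness check seen in the trace.
    waitfornbd() {
        local nbd_name=$1 i size
        # 1) Wait for the kernel to publish the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed poll interval; not visible in the xtrace
        done
        # 2) One direct 4 KiB read; a non-empty result means I/O works.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }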
00:07:27.361 20:14:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:27.361 20:14:42 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:27.361 20:14:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:27.663 20:14:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:07:27.663 20:14:42 -- common/autotest_common.sh@857 -- # local i 00:07:27.663 20:14:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:27.663 20:14:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:27.663 20:14:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:07:27.663 20:14:42 -- common/autotest_common.sh@861 -- # break 00:07:27.663 20:14:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:27.663 20:14:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:27.663 20:14:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.663 1+0 records in 00:07:27.663 1+0 records out 00:07:27.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266927 s, 15.3 MB/s 00:07:27.663 20:14:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.663 20:14:42 -- common/autotest_common.sh@874 -- # size=4096 00:07:27.663 20:14:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.663 20:14:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:27.663 20:14:42 -- common/autotest_common.sh@877 -- # return 0 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:27.663 20:14:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:07:27.663 20:14:42 -- common/autotest_common.sh@857 -- # local i 00:07:27.663 20:14:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:27.663 20:14:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:27.663 20:14:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:07:27.663 20:14:42 -- common/autotest_common.sh@861 -- # break 00:07:27.663 20:14:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:27.663 20:14:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:27.663 20:14:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.663 1+0 records in 00:07:27.663 1+0 records out 00:07:27.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396172 s, 10.3 MB/s 00:07:27.663 20:14:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.663 20:14:42 -- common/autotest_common.sh@874 -- # size=4096 00:07:27.663 20:14:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.663 20:14:42 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:07:27.663 20:14:42 -- common/autotest_common.sh@877 -- # return 0 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:27.663 20:14:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:27.921 20:14:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:27.921 20:14:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:27.921 20:14:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:27.921 20:14:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:07:27.921 20:14:42 -- common/autotest_common.sh@857 -- # local i 00:07:27.921 20:14:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:27.921 20:14:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:27.921 20:14:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:07:27.921 20:14:42 -- common/autotest_common.sh@861 -- # break 00:07:27.921 20:14:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:27.921 20:14:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:27.921 20:14:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.921 1+0 records in 00:07:27.921 1+0 records out 00:07:27.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619481 s, 6.6 MB/s 00:07:27.921 20:14:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.921 20:14:42 -- common/autotest_common.sh@874 -- # size=4096 00:07:27.921 20:14:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.921 20:14:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:27.921 20:14:42 -- common/autotest_common.sh@877 -- # return 0 00:07:27.921 20:14:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:27.921 20:14:42 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:27.921 20:14:42 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.179 20:14:42 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd0", 00:07:28.179 "bdev_name": "Nvme0n1" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd1", 00:07:28.179 "bdev_name": "Nvme1n1" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd2", 00:07:28.179 "bdev_name": "Nvme2n1" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd3", 00:07:28.179 "bdev_name": "Nvme2n2" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd4", 00:07:28.179 "bdev_name": "Nvme2n3" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd5", 00:07:28.179 "bdev_name": "Nvme3n1" 00:07:28.179 } 00:07:28.179 ]' 00:07:28.179 20:14:42 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:28.179 20:14:42 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:28.179 20:14:42 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd0", 00:07:28.179 "bdev_name": "Nvme0n1" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd1", 00:07:28.179 "bdev_name": "Nvme1n1" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd2", 00:07:28.179 "bdev_name": "Nvme2n1" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd3", 00:07:28.179 
"bdev_name": "Nvme2n2" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd4", 00:07:28.179 "bdev_name": "Nvme2n3" 00:07:28.179 }, 00:07:28.179 { 00:07:28.179 "nbd_device": "/dev/nbd5", 00:07:28.179 "bdev_name": "Nvme3n1" 00:07:28.179 } 00:07:28.179 ]' 00:07:28.179 20:14:43 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:28.179 20:14:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.179 20:14:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:28.179 20:14:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:28.179 20:14:43 -- bdev/nbd_common.sh@51 -- # local i 00:07:28.179 20:14:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.179 20:14:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@41 -- # break 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.437 20:14:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@41 -- # break 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.697 20:14:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@41 -- # break 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:28.956 
20:14:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@41 -- # break 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.956 20:14:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@41 -- # break 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.216 20:14:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:29.476 20:14:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:29.476 20:14:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:29.476 20:14:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:29.476 20:14:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.476 20:14:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.476 20:14:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:29.477 20:14:44 -- bdev/nbd_common.sh@41 -- # break 00:07:29.477 20:14:44 -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.477 20:14:44 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:29.477 20:14:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.477 20:14:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@65 -- # true 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@65 -- # count=0 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@122 -- # count=0 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@127 -- # return 0 00:07:29.738 20:14:44 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@12 -- # local i 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:29.738 20:14:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:29.999 /dev/nbd0 00:07:29.999 20:14:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:29.999 20:14:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:29.999 20:14:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:29.999 20:14:44 -- common/autotest_common.sh@857 -- # local i 00:07:29.999 20:14:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:29.999 20:14:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:30.000 20:14:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:30.000 20:14:44 -- common/autotest_common.sh@861 -- # break 00:07:30.000 20:14:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:30.000 20:14:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:30.000 20:14:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.000 1+0 records in 00:07:30.000 1+0 records out 00:07:30.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456044 s, 9.0 MB/s 00:07:30.000 20:14:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.000 20:14:44 -- common/autotest_common.sh@874 -- # size=4096 00:07:30.000 20:14:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.000 20:14:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:30.000 20:14:44 -- common/autotest_common.sh@877 -- # return 0 00:07:30.000 20:14:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.000 20:14:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:30.000 20:14:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:30.000 /dev/nbd1 00:07:30.000 20:14:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:30.000 20:14:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:30.000 20:14:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:30.000 20:14:44 -- common/autotest_common.sh@857 -- # local i 00:07:30.000 20:14:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:30.000 20:14:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:30.000 20:14:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:30.000 20:14:44 -- common/autotest_common.sh@861 -- # break 
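Unlike the earlier start/stop pass, this data-verify pass pins each bdev to an explicit NBD node (/dev/nbd0, /dev/nbd1, then /dev/nbd10 through /dev/nbd13) via the nbd_start_disk RPC on the dedicated /var/tmp/spdk-nbd.sock socket. The same mapping can be driven by hand, assuming the nbd kernel module is loaded and a bdev_svc-style app is listening on that socket (SPDK_DIR is illustrative; the RPC names are the ones visible in the trace):

    SPDK_DIR=${SPDK_DIR:-$HOME/spdk_repo/spdk}            # assumption
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc nbd_start_disk Nvme0n1 /dev/nbd0   # attach the bdev to a chosen node
    rpc nbd_get_disks                      # JSON list of active mappings
    rpc nbd_stop_disk /dev/nbd0            # detach when finished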
00:07:30.000 20:14:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:30.000 20:14:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:30.000 20:14:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.000 1+0 records in 00:07:30.000 1+0 records out 00:07:30.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083082 s, 4.9 MB/s 00:07:30.000 20:14:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.000 20:14:44 -- common/autotest_common.sh@874 -- # size=4096 00:07:30.000 20:14:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.000 20:14:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:30.000 20:14:44 -- common/autotest_common.sh@877 -- # return 0 00:07:30.000 20:14:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.000 20:14:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:30.000 20:14:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:30.260 /dev/nbd10 00:07:30.260 20:14:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:30.260 20:14:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:30.260 20:14:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:07:30.260 20:14:45 -- common/autotest_common.sh@857 -- # local i 00:07:30.260 20:14:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:30.260 20:14:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:30.260 20:14:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:07:30.260 20:14:45 -- common/autotest_common.sh@861 -- # break 00:07:30.260 20:14:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:30.260 20:14:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:30.260 20:14:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.260 1+0 records in 00:07:30.260 1+0 records out 00:07:30.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660525 s, 6.2 MB/s 00:07:30.260 20:14:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.260 20:14:45 -- common/autotest_common.sh@874 -- # size=4096 00:07:30.260 20:14:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.260 20:14:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:30.260 20:14:45 -- common/autotest_common.sh@877 -- # return 0 00:07:30.260 20:14:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.260 20:14:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:30.260 20:14:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:30.521 /dev/nbd11 00:07:30.521 20:14:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:30.521 20:14:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:30.521 20:14:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:07:30.521 20:14:45 -- common/autotest_common.sh@857 -- # local i 00:07:30.521 20:14:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:30.521 20:14:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:30.521 20:14:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:07:30.521 20:14:45 -- 
common/autotest_common.sh@861 -- # break 00:07:30.521 20:14:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:30.521 20:14:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:30.521 20:14:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.521 1+0 records in 00:07:30.521 1+0 records out 00:07:30.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654677 s, 6.3 MB/s 00:07:30.521 20:14:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.521 20:14:45 -- common/autotest_common.sh@874 -- # size=4096 00:07:30.521 20:14:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.521 20:14:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:30.521 20:14:45 -- common/autotest_common.sh@877 -- # return 0 00:07:30.521 20:14:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.521 20:14:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:30.521 20:14:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:30.780 /dev/nbd12 00:07:30.780 20:14:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:30.780 20:14:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:30.780 20:14:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:07:30.780 20:14:45 -- common/autotest_common.sh@857 -- # local i 00:07:30.780 20:14:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:30.780 20:14:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:30.781 20:14:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:07:30.781 20:14:45 -- common/autotest_common.sh@861 -- # break 00:07:30.781 20:14:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:30.781 20:14:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:30.781 20:14:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.781 1+0 records in 00:07:30.781 1+0 records out 00:07:30.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658579 s, 6.2 MB/s 00:07:30.781 20:14:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.781 20:14:45 -- common/autotest_common.sh@874 -- # size=4096 00:07:30.781 20:14:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.781 20:14:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:30.781 20:14:45 -- common/autotest_common.sh@877 -- # return 0 00:07:30.781 20:14:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.781 20:14:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:30.781 20:14:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:31.041 /dev/nbd13 00:07:31.041 20:14:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:31.041 20:14:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:31.041 20:14:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:07:31.041 20:14:45 -- common/autotest_common.sh@857 -- # local i 00:07:31.041 20:14:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:31.041 20:14:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:31.041 20:14:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 
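Each of these attaches is a single nbd_start_disk RPC against the dedicated /var/tmp/spdk-nbd.sock server, pairing one bdev with one free /dev/nbdX node; nbd_stop_disk later undoes the pairing. The same wiring in isolation, using the socket and script paths from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s $sock nbd_start_disk Nvme2n2 /dev/nbd11   # expose the bdev as a kernel block device
# ... exercise /dev/nbd11 like any disk ...
$rpc -s $sock nbd_stop_disk /dev/nbd11            # detach when finished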
00:07:31.041 20:14:45 -- common/autotest_common.sh@861 -- # break 00:07:31.041 20:14:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:31.041 20:14:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:31.041 20:14:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:31.041 1+0 records in 00:07:31.041 1+0 records out 00:07:31.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518956 s, 7.9 MB/s 00:07:31.041 20:14:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.041 20:14:45 -- common/autotest_common.sh@874 -- # size=4096 00:07:31.041 20:14:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.041 20:14:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:31.041 20:14:45 -- common/autotest_common.sh@877 -- # return 0 00:07:31.041 20:14:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.041 20:14:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:31.041 20:14:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.041 20:14:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.041 20:14:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.302 20:14:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd0", 00:07:31.303 "bdev_name": "Nvme0n1" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd1", 00:07:31.303 "bdev_name": "Nvme1n1" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd10", 00:07:31.303 "bdev_name": "Nvme2n1" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd11", 00:07:31.303 "bdev_name": "Nvme2n2" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd12", 00:07:31.303 "bdev_name": "Nvme2n3" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd13", 00:07:31.303 "bdev_name": "Nvme3n1" 00:07:31.303 } 00:07:31.303 ]' 00:07:31.303 20:14:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd0", 00:07:31.303 "bdev_name": "Nvme0n1" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd1", 00:07:31.303 "bdev_name": "Nvme1n1" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd10", 00:07:31.303 "bdev_name": "Nvme2n1" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd11", 00:07:31.303 "bdev_name": "Nvme2n2" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd12", 00:07:31.303 "bdev_name": "Nvme2n3" 00:07:31.303 }, 00:07:31.303 { 00:07:31.303 "nbd_device": "/dev/nbd13", 00:07:31.303 "bdev_name": "Nvme3n1" 00:07:31.303 } 00:07:31.303 ]' 00:07:31.303 20:14:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:31.303 /dev/nbd1 00:07:31.303 /dev/nbd10 00:07:31.303 /dev/nbd11 00:07:31.303 /dev/nbd12 00:07:31.303 /dev/nbd13' 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:31.303 /dev/nbd1 00:07:31.303 /dev/nbd10 00:07:31.303 /dev/nbd11 00:07:31.303 /dev/nbd12 00:07:31.303 /dev/nbd13' 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@65 -- # count=6 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@95 -- # 
count=6 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:31.303 256+0 records in 00:07:31.303 256+0 records out 00:07:31.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564349 s, 186 MB/s 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:31.303 256+0 records in 00:07:31.303 256+0 records out 00:07:31.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124063 s, 8.5 MB/s 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.303 20:14:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:31.563 256+0 records in 00:07:31.563 256+0 records out 00:07:31.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0867128 s, 12.1 MB/s 00:07:31.563 20:14:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.563 20:14:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:31.563 256+0 records in 00:07:31.563 256+0 records out 00:07:31.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127893 s, 8.2 MB/s 00:07:31.563 20:14:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.563 20:14:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:31.824 256+0 records in 00:07:31.824 256+0 records out 00:07:31.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134113 s, 7.8 MB/s 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:31.824 256+0 records in 00:07:31.824 256+0 records out 00:07:31.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0914525 s, 11.5 MB/s 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:31.824 256+0 records in 00:07:31.824 256+0 records out 00:07:31.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.070677 s, 14.8 MB/s 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.824 20:14:46 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@51 -- # local i 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.824 20:14:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@41 -- # break 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.084 20:14:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:32.344 20:14:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:32.344 20:14:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:32.344 20:14:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:32.344 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.344 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.344 20:14:47 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:32.344 20:14:47 -- bdev/nbd_common.sh@41 -- # break 00:07:32.344 20:14:47 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.344 20:14:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.344 20:14:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@41 -- # break 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:32.603 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.604 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.604 20:14:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:32.604 20:14:47 -- bdev/nbd_common.sh@41 -- # break 00:07:32.604 20:14:47 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.604 20:14:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.604 20:14:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@41 -- # break 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.864 20:14:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@41 -- # break 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.125 20:14:47 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@65 -- # true 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.414 20:14:48 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:33.414 malloc_lvol_verify 00:07:33.414 20:14:48 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:33.674 8824f0e2-bf74-4e60-8153-19b57194712c 00:07:33.674 20:14:48 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:33.932 a1107fdd-a5e3-4439-a983-dc85fd6760e7 00:07:33.932 20:14:48 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:34.191 /dev/nbd0 00:07:34.191 20:14:48 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:34.191 mke2fs 1.47.0 (5-Feb-2023) 00:07:34.191 Discarding device blocks: 0/4096 done 00:07:34.191 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:34.191 00:07:34.191 Allocating group tables: 0/1 done 00:07:34.191 Writing inode tables: 0/1 done 00:07:34.191 Creating journal (1024 blocks): done 00:07:34.191 Writing superblocks and filesystem accounting information: 0/1 done 00:07:34.191 00:07:34.191 20:14:48 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:34.191 20:14:48 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:34.191 20:14:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.191 20:14:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:34.191 20:14:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:34.191 20:14:48 -- bdev/nbd_common.sh@51 -- # local i 00:07:34.191 20:14:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.191 20:14:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@41 -- # break 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:34.450 20:14:49 -- bdev/nbd_common.sh@147 -- # return 0 00:07:34.450 20:14:49 -- bdev/blockdev.sh@324 -- # killprocess 60318 00:07:34.450 20:14:49 -- common/autotest_common.sh@926 -- # '[' -z 60318 ']' 00:07:34.450 20:14:49 -- common/autotest_common.sh@930 -- # kill -0 60318 00:07:34.450 20:14:49 -- common/autotest_common.sh@931 -- # uname 00:07:34.450 20:14:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:34.450 20:14:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60318 00:07:34.450 20:14:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:34.451 20:14:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:34.451 killing process with pid 60318 00:07:34.451 20:14:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60318' 00:07:34.451 20:14:49 -- common/autotest_common.sh@945 -- # kill 60318 00:07:34.451 20:14:49 -- common/autotest_common.sh@950 -- # wait 60318 00:07:35.016 20:14:49 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:35.016 00:07:35.016 real 0m9.247s 00:07:35.016 user 0m13.163s 00:07:35.016 sys 0m2.903s 00:07:35.016 20:14:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.016 20:14:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.016 ************************************ 00:07:35.016 END TEST bdev_nbd 00:07:35.016 ************************************ 00:07:35.016 20:14:49 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:35.016 skipping fio tests on NVMe due to multi-ns failures. 00:07:35.016 20:14:49 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:35.016 20:14:49 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:35.016 20:14:49 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:35.017 20:14:49 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:35.017 20:14:49 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:07:35.017 20:14:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.017 20:14:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.017 ************************************ 00:07:35.017 START TEST bdev_verify 00:07:35.017 ************************************ 00:07:35.017 20:14:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:35.277 [2024-10-16 20:14:49.982928] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
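The bdev_verify test starting here drives every bdev from the generated bdev.json through the bdevperf example app with a data-verifying workload: queue depth 128, 4 KiB I/Os, -w verify (writes are read back and compared), five seconds, two reactors pinned by -m 0x3, per-core stats via -C. The bare invocation, exactly as this run issues it:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
$bdevperf --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3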
00:07:35.277 [2024-10-16 20:14:49.983059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60688 ] 00:07:35.277 [2024-10-16 20:14:50.129876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:35.535 [2024-10-16 20:14:50.273891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.535 [2024-10-16 20:14:50.273910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.101 Running I/O for 5 seconds... 00:07:41.363 00:07:41.363 Latency(us) 00:07:41.363 [2024-10-16T20:14:56.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x0 length 0xbd0bd 00:07:41.363 Nvme0n1 : 5.04 2894.15 11.31 0.00 0.00 44114.93 7259.37 48194.17 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:41.363 Nvme0n1 : 5.04 2877.30 11.24 0.00 0.00 44377.84 5318.50 49605.71 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x0 length 0xa0000 00:07:41.363 Nvme1n1 : 5.04 2893.34 11.30 0.00 0.00 44103.39 7713.08 49807.36 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0xa0000 length 0xa0000 00:07:41.363 Nvme1n1 : 5.05 2875.79 11.23 0.00 0.00 44330.42 6906.49 49202.41 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x0 length 0x80000 00:07:41.363 Nvme2n1 : 5.04 2891.80 11.30 0.00 0.00 44079.79 8116.38 52832.10 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x80000 length 0x80000 00:07:41.363 Nvme2n1 : 5.05 2874.02 11.23 0.00 0.00 44278.30 8721.33 47790.87 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x0 length 0x80000 00:07:41.363 Nvme2n2 : 5.05 2890.03 11.29 0.00 0.00 44049.36 9830.40 51420.55 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x80000 length 0x80000 00:07:41.363 Nvme2n2 : 5.05 2879.42 11.25 0.00 0.00 44195.09 2041.70 48194.17 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x0 length 0x80000 00:07:41.363 Nvme2n3 : 5.05 2895.65 11.31 0.00 0.00 43972.07 1046.06 50009.01 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x80000 length 0x80000 00:07:41.363 Nvme2n3 : 5.06 2877.75 11.24 0.00 0.00 44171.26 3856.54 49404.06 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x0 length 0x20000 00:07:41.363 Nvme3n1 : 
5.05 2894.95 11.31 0.00 0.00 43942.18 1751.83 47185.92 00:07:41.363 [2024-10-16T20:14:56.292Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:41.363 Verification LBA range: start 0x20000 length 0x20000 00:07:41.363 Nvme3n1 : 5.06 2877.02 11.24 0.00 0.00 44154.28 4486.70 50009.01 00:07:41.363 [2024-10-16T20:14:56.292Z] =================================================================================================================== 00:07:41.363 [2024-10-16T20:14:56.292Z] Total : 34621.22 135.24 0.00 0.00 44147.06 1046.06 52832.10 00:08:20.068 00:08:20.068 real 0m38.443s 00:08:20.068 user 1m15.576s 00:08:20.068 sys 0m0.355s 00:08:20.068 20:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.068 20:15:28 -- common/autotest_common.sh@10 -- # set +x 00:08:20.068 ************************************ 00:08:20.068 END TEST bdev_verify 00:08:20.068 ************************************ 00:08:20.068 20:15:28 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:20.068 20:15:28 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:08:20.068 20:15:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.068 20:15:28 -- common/autotest_common.sh@10 -- # set +x 00:08:20.068 ************************************ 00:08:20.068 START TEST bdev_verify_big_io 00:08:20.068 ************************************ 00:08:20.068 20:15:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:20.068 [2024-10-16 20:15:28.465964] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:20.068 [2024-10-16 20:15:28.466092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61123 ] 00:08:20.068 [2024-10-16 20:15:28.614025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.068 [2024-10-16 20:15:28.802543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.068 [2024-10-16 20:15:28.802728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.068 Running I/O for 5 seconds... 
00:08:20.068 00:08:20.068 Latency(us) 00:08:20.068 [2024-10-16T20:15:34.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x0 length 0xbd0b 00:08:20.068 Nvme0n1 : 5.41 208.62 13.04 0.00 0.00 599855.47 30650.68 825955.25 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:20.068 Nvme0n1 : 5.40 217.46 13.59 0.00 0.00 577379.32 26819.35 790464.98 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x0 length 0xa000 00:08:20.068 Nvme1n1 : 5.43 216.58 13.54 0.00 0.00 573251.29 11342.77 754974.72 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0xa000 length 0xa000 00:08:20.068 Nvme1n1 : 5.42 223.88 13.99 0.00 0.00 556456.37 16837.71 738842.78 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x0 length 0x8000 00:08:20.068 Nvme2n1 : 5.43 216.51 13.53 0.00 0.00 563074.48 11998.13 680767.80 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x8000 length 0x8000 00:08:20.068 Nvme2n1 : 5.42 223.81 13.99 0.00 0.00 546963.49 17442.66 671088.64 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x0 length 0x8000 00:08:20.068 Nvme2n2 : 5.43 216.43 13.53 0.00 0.00 552927.98 13107.20 606560.89 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x8000 length 0x8000 00:08:20.068 Nvme2n2 : 5.43 223.73 13.98 0.00 0.00 537506.57 18249.26 603334.50 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x0 length 0x8000 00:08:20.068 Nvme2n3 : 5.46 222.48 13.91 0.00 0.00 527690.75 25710.28 532353.97 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x8000 length 0x8000 00:08:20.068 Nvme2n3 : 5.46 229.57 14.35 0.00 0.00 513949.65 29239.14 532353.97 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x0 length 0x2000 00:08:20.068 Nvme3n1 : 5.50 252.86 15.80 0.00 0.00 457675.75 523.03 509769.26 00:08:20.068 [2024-10-16T20:15:34.997Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.068 Verification LBA range: start 0x2000 length 0x2000 00:08:20.068 Nvme3n1 : 5.50 253.04 15.82 0.00 0.00 460352.61 1663.61 506542.87 00:08:20.068 [2024-10-16T20:15:34.997Z] =================================================================================================================== 00:08:20.068 [2024-10-16T20:15:34.997Z] Total : 2704.96 169.06 0.00 0.00 536289.97 523.03 825955.25 00:08:21.967 00:08:21.967 real 0m8.409s 00:08:21.967 user 
0m15.700s 00:08:21.967 sys 0m0.240s 00:08:21.967 20:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.967 ************************************ 00:08:21.967 END TEST bdev_verify_big_io 00:08:21.967 ************************************ 00:08:21.967 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.967 20:15:36 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:21.967 20:15:36 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:21.967 20:15:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.967 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.967 ************************************ 00:08:21.967 START TEST bdev_write_zeroes 00:08:21.967 ************************************ 00:08:21.967 20:15:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.225 [2024-10-16 20:15:36.933544] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:22.225 [2024-10-16 20:15:36.933655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61232 ] 00:08:22.225 [2024-10-16 20:15:37.081858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.483 [2024-10-16 20:15:37.267388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.048 Running I/O for 1 seconds... 00:08:23.980 00:08:23.980 Latency(us) 00:08:23.980 [2024-10-16T20:15:38.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.980 [2024-10-16T20:15:38.909Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.980 Nvme0n1 : 1.01 10645.05 41.58 0.00 0.00 11989.85 8166.79 29440.79 00:08:23.980 [2024-10-16T20:15:38.909Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.980 Nvme1n1 : 1.01 10631.85 41.53 0.00 0.00 11987.91 8771.74 29642.44 00:08:23.980 [2024-10-16T20:15:38.909Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.980 Nvme2n1 : 1.02 10675.74 41.70 0.00 0.00 11828.11 6024.27 22887.19 00:08:23.980 [2024-10-16T20:15:38.909Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.980 Nvme2n2 : 1.02 10691.84 41.77 0.00 0.00 11783.12 6604.01 21273.99 00:08:23.980 [2024-10-16T20:15:38.909Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.980 Nvme2n3 : 1.02 10679.87 41.72 0.00 0.00 11764.76 6175.51 19963.27 00:08:23.980 [2024-10-16T20:15:38.909Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.980 Nvme3n1 : 1.03 10721.60 41.88 0.00 0.00 11713.15 5066.44 19963.27 00:08:23.980 [2024-10-16T20:15:38.909Z] =================================================================================================================== 00:08:23.980 [2024-10-16T20:15:38.909Z] Total : 64045.95 250.18 0.00 0.00 11843.52 5066.44 29642.44 00:08:24.913 00:08:24.913 real 0m2.798s 00:08:24.913 user 0m2.484s 00:08:24.913 sys 0m0.190s 00:08:24.913 20:15:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:08:24.913 ************************************ 00:08:24.913 END TEST bdev_write_zeroes 00:08:24.913 ************************************ 00:08:24.913 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:08:24.913 20:15:39 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.913 20:15:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:24.913 20:15:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.913 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:08:24.913 ************************************ 00:08:24.913 START TEST bdev_json_nonenclosed 00:08:24.913 ************************************ 00:08:24.913 20:15:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.913 [2024-10-16 20:15:39.769809] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:24.913 [2024-10-16 20:15:39.769925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61280 ] 00:08:25.171 [2024-10-16 20:15:39.917234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.429 [2024-10-16 20:15:40.121489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.429 [2024-10-16 20:15:40.121645] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:25.429 [2024-10-16 20:15:40.121664] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.687 00:08:25.687 real 0m0.689s 00:08:25.687 user 0m0.489s 00:08:25.687 sys 0m0.094s 00:08:25.687 20:15:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.687 ************************************ 00:08:25.687 END TEST bdev_json_nonenclosed 00:08:25.687 ************************************ 00:08:25.687 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:08:25.687 20:15:40 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.687 20:15:40 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:25.687 20:15:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.687 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:08:25.687 ************************************ 00:08:25.687 START TEST bdev_json_nonarray 00:08:25.687 ************************************ 00:08:25.687 20:15:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.687 [2024-10-16 20:15:40.511861] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
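bdev_json_nonenclosed and bdev_json_nonarray are deliberate failure cases: the first feeds bdevperf a config whose top level is not enclosed in {}, the second one whose "subsystems" key is not an array, and each expects the clean *ERROR* plus non-zero spdk_app_stop seen in the log rather than a crash. For contrast, the accepted shape (a minimal sketch mirroring the Nvme0 attach this job uses later; runnable with the bdevperf command above):

cat > /tmp/good.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } }
      ]
    }
  ]
}
EOF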
00:08:25.687 [2024-10-16 20:15:40.512005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61311 ] 00:08:25.944 [2024-10-16 20:15:40.661389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.944 [2024-10-16 20:15:40.841537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.944 [2024-10-16 20:15:40.841694] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:25.944 [2024-10-16 20:15:40.841712] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.204 00:08:26.204 real 0m0.672s 00:08:26.205 user 0m0.462s 00:08:26.205 sys 0m0.096s 00:08:26.205 ************************************ 00:08:26.205 20:15:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.205 20:15:41 -- common/autotest_common.sh@10 -- # set +x 00:08:26.205 END TEST bdev_json_nonarray 00:08:26.205 ************************************ 00:08:26.565 20:15:41 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:26.565 20:15:41 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:26.565 20:15:41 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:26.566 20:15:41 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:26.566 20:15:41 -- bdev/blockdev.sh@809 -- # cleanup 00:08:26.566 20:15:41 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:26.566 20:15:41 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:26.566 20:15:41 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:26.566 20:15:41 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:26.566 20:15:41 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:26.566 20:15:41 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:26.566 00:08:26.566 real 1m7.768s 00:08:26.566 user 1m58.138s 00:08:26.566 sys 0m4.951s 00:08:26.566 20:15:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.566 ************************************ 00:08:26.566 END TEST blockdev_nvme 00:08:26.566 ************************************ 00:08:26.566 20:15:41 -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 20:15:41 -- spdk/autotest.sh@219 -- # uname -s 00:08:26.566 20:15:41 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:08:26.566 20:15:41 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:26.566 20:15:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:26.566 20:15:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.566 20:15:41 -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 ************************************ 00:08:26.566 START TEST blockdev_nvme_gpt 00:08:26.566 ************************************ 00:08:26.566 20:15:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:26.566 * Looking for test storage... 
00:08:26.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:26.566 20:15:41 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:26.566 20:15:41 -- bdev/nbd_common.sh@6 -- # set -e 00:08:26.566 20:15:41 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:26.566 20:15:41 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:26.566 20:15:41 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:26.566 20:15:41 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:26.566 20:15:41 -- bdev/blockdev.sh@18 -- # : 00:08:26.566 20:15:41 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:26.566 20:15:41 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:26.566 20:15:41 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:26.566 20:15:41 -- bdev/blockdev.sh@672 -- # uname -s 00:08:26.566 20:15:41 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:26.566 20:15:41 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:26.566 20:15:41 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:26.566 20:15:41 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:26.566 20:15:41 -- bdev/blockdev.sh@682 -- # dek= 00:08:26.566 20:15:41 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:26.566 20:15:41 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:26.566 20:15:41 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:26.566 20:15:41 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:26.566 20:15:41 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:26.566 20:15:41 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:26.566 20:15:41 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61386 00:08:26.566 20:15:41 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:26.566 20:15:41 -- bdev/blockdev.sh@47 -- # waitforlisten 61386 00:08:26.566 20:15:41 -- common/autotest_common.sh@819 -- # '[' -z 61386 ']' 00:08:26.566 20:15:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.566 20:15:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:26.566 20:15:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.566 20:15:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:26.566 20:15:41 -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 20:15:41 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:26.566 [2024-10-16 20:15:41.393824] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
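start_spdk_tgt launches the target and waitforlisten blocks until the UNIX-domain RPC socket answers; every rpc_cmd in the gpt setup that follows assumes that socket is live. A hand-rolled equivalent of the wait (default /var/tmp/spdk.sock, as used here; the 0.5 s poll interval is arbitrary):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
# poll the default RPC socket until the target is ready to serve
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
    sleep 0.5
done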
00:08:26.566 [2024-10-16 20:15:41.393939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61386 ] 00:08:26.824 [2024-10-16 20:15:41.541325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.824 [2024-10-16 20:15:41.718692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.824 [2024-10-16 20:15:41.718903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.197 20:15:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:28.197 20:15:42 -- common/autotest_common.sh@852 -- # return 0 00:08:28.197 20:15:42 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:28.197 20:15:42 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:28.197 20:15:42 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:28.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:28.455 Waiting for block devices as requested 00:08:28.455 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.713 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.713 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.713 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:33.972 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:33.972 20:15:48 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:33.972 20:15:48 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:08:33.972 20:15:48 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:08:33.972 20:15:48 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:08:33.972 20:15:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:33.972 20:15:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:33.973 20:15:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:33.973 20:15:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:33.973 20:15:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:08:33.973 20:15:48 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:08:33.973 20:15:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:33.973 20:15:48 -- 
common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:33.973 20:15:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:08:33.973 20:15:48 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:08:33.973 20:15:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:33.973 20:15:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:33.973 20:15:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:08:33.973 20:15:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:33.973 20:15:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:33.973 20:15:48 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:33.973 20:15:48 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:33.973 20:15:48 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:33.973 20:15:48 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:33.973 20:15:48 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:33.973 20:15:48 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:33.973 20:15:48 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:33.973 20:15:48 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:33.973 BYT; 00:08:33.973 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:33.973 20:15:48 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:33.973 BYT; 00:08:33.973 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:33.973 20:15:48 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:33.973 20:15:48 -- bdev/blockdev.sh@114 -- # break 00:08:33.973 20:15:48 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:33.973 20:15:48 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:33.973 20:15:48 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:33.973 20:15:48 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:33.973 20:15:48 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:33.973 20:15:48 -- scripts/common.sh@410 -- # local spdk_guid 00:08:33.973 20:15:48 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:33.973 20:15:48 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.973 20:15:48 -- scripts/common.sh@415 -- # IFS='()' 00:08:33.973 20:15:48 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:33.973 20:15:48 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.973 20:15:48 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:33.973 20:15:48 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:33.973 20:15:48 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:33.973 20:15:48 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:33.973 20:15:48 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:33.973 20:15:48 -- scripts/common.sh@422 -- # local spdk_guid 00:08:33.973 20:15:48 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:33.973 20:15:48 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.973 20:15:48 -- scripts/common.sh@427 -- # IFS='()' 00:08:33.973 20:15:48 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:33.973 20:15:48 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:33.973 20:15:48 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:33.973 20:15:48 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:33.973 20:15:48 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:33.973 20:15:48 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:33.973 20:15:48 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:34.906 The operation has completed successfully. 00:08:34.906 20:15:49 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:35.838 The operation has completed successfully. 
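With both sgdisk calls done, partition 1 carries SPDK's current partition-type GUID (6527994e-...) and partition 2 the legacy one (7c5222bd-...), each with a pinned unique GUID so the gpt vbdev module can surface them as the stable Nvme0n1p1/Nvme0n1p2 bdevs seen later in the run. The whole labelling step, condensed from the commands above:

dev=/dev/nvme2n1
# fresh GPT label with two equal halves
parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# stamp SPDK's type GUIDs so the gpt module claims the partitions on examine
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"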
00:08:35.838 20:15:50 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:36.771 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.771 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.771 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.771 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.771 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.771 20:15:51 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:36.771 20:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.771 20:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:36.771 [] 00:08:36.771 20:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.771 20:15:51 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:36.771 20:15:51 -- bdev/blockdev.sh@79 -- # local json 00:08:36.771 20:15:51 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:36.771 20:15:51 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.028 20:15:51 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:37.028 20:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.028 20:15:51 -- common/autotest_common.sh@10 -- # set +x 00:08:37.287 20:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.287 20:15:51 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:37.287 20:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.287 20:15:52 -- common/autotest_common.sh@10 -- # set +x 00:08:37.287 20:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.287 20:15:52 -- bdev/blockdev.sh@738 -- # cat 00:08:37.287 20:15:52 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:37.287 20:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.287 20:15:52 -- common/autotest_common.sh@10 -- # set +x 00:08:37.287 20:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.287 20:15:52 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:37.287 20:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.287 20:15:52 -- common/autotest_common.sh@10 -- # set +x 00:08:37.287 20:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.287 20:15:52 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:37.287 20:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.287 20:15:52 -- common/autotest_common.sh@10 -- # set +x 00:08:37.287 20:15:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.287 20:15:52 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:37.287 20:15:52 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:37.287 20:15:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.287 20:15:52 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:37.287 20:15:52 -- common/autotest_common.sh@10 -- # set +x 00:08:37.287 20:15:52 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.287 20:15:52 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:37.287 20:15:52 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:37.288 20:15:52 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a4b4b66b-ac15-4838-a52e-8bd501e08f73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a4b4b66b-ac15-4838-a52e-8bd501e08f73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "a910ac63-0e37-4e3c-9c9f-ba5e2aa27869"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a910ac63-0e37-4e3c-9c9f-ba5e2aa27869",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8c6f5536-be18-4d9a-92ab-2b30fb7eabf8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c6f5536-be18-4d9a-92ab-2b30fb7eabf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0399f1bf-62f3-462b-ad3f-69c3bee5ee74"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0399f1bf-62f3-462b-ad3f-69c3bee5ee74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9516208f-6afd-4923-842e-26afa9365c5f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9516208f-6afd-4923-842e-26afa9365c5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:37.288 20:15:52 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:37.288 20:15:52 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:37.288 20:15:52 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:37.288 20:15:52 -- bdev/blockdev.sh@752 -- # killprocess 61386 00:08:37.288 20:15:52 -- common/autotest_common.sh@926 -- # '[' -z 61386 ']' 00:08:37.288 20:15:52 -- common/autotest_common.sh@930 -- # kill -0 61386 00:08:37.288 20:15:52 -- common/autotest_common.sh@931 -- # uname 00:08:37.288 20:15:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:37.288 20:15:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61386 00:08:37.288 20:15:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:37.288 killing process with pid 61386 00:08:37.288 20:15:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:37.288 20:15:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61386' 00:08:37.288 20:15:52 -- common/autotest_common.sh@945 -- # kill 61386 00:08:37.288 20:15:52 -- common/autotest_common.sh@950 -- # wait 61386 00:08:39.252 20:15:53 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:39.252 20:15:53 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:39.252 20:15:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:39.252 20:15:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.252 20:15:53 -- common/autotest_common.sh@10 -- # set +x 00:08:39.252 ************************************ 00:08:39.252 START TEST bdev_hello_world 00:08:39.252 ************************************ 00:08:39.252 20:15:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:39.252 [2024-10-16 20:15:53.722596] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:39.252 [2024-10-16 20:15:53.722701] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62036 ] 00:08:39.252 [2024-10-16 20:15:53.869893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.252 [2024-10-16 20:15:54.006374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.819 [2024-10-16 20:15:54.471842] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:39.819 [2024-10-16 20:15:54.471879] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:39.819 [2024-10-16 20:15:54.471894] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:39.819 [2024-10-16 20:15:54.473817] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:39.819 [2024-10-16 20:15:54.474242] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:39.819 [2024-10-16 20:15:54.474268] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:39.819 [2024-10-16 20:15:54.474492] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:39.819 00:08:39.819 [2024-10-16 20:15:54.474517] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:40.384 00:08:40.384 real 0m1.423s 00:08:40.384 user 0m1.141s 00:08:40.384 sys 0m0.175s 00:08:40.384 20:15:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.385 20:15:55 -- common/autotest_common.sh@10 -- # set +x 00:08:40.385 ************************************ 00:08:40.385 END TEST bdev_hello_world 00:08:40.385 ************************************ 00:08:40.385 20:15:55 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:40.385 20:15:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:40.385 20:15:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.385 20:15:55 -- common/autotest_common.sh@10 -- # set +x 00:08:40.385 ************************************ 00:08:40.385 START TEST bdev_bounds 00:08:40.385 ************************************ 00:08:40.385 20:15:55 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:08:40.385 20:15:55 -- bdev/blockdev.sh@288 -- # bdevio_pid=62072 00:08:40.385 20:15:55 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:40.385 Process bdevio pid: 62072 00:08:40.385 20:15:55 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 62072' 00:08:40.385 20:15:55 -- bdev/blockdev.sh@291 -- # waitforlisten 62072 00:08:40.385 20:15:55 -- common/autotest_common.sh@819 -- # '[' -z 62072 ']' 00:08:40.385 20:15:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.385 20:15:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:40.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.385 20:15:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
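The hello_bdev example that just ran can also be invoked by hand against any bdev in a JSON config: it opens the named bdev, writes "Hello World!" through an io channel, reads it back, and stops the app. A sketch of the standalone invocation, with the paths and target taken from this run (root privileges assumed, since the controllers are PCIe-attached):

    # Run the example against the first GPT partition; bdev.json is the
    # config the harness generated earlier from the gen_nvme.sh output.
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1p1
    # Expect the NOTICE lines seen above: "Writing to the bdev",
    # "Read string from bdev : Hello World!", then "Stopping app".

The bdevio bounds run that follows uses the same config but drives a full CUnit suite per bdev instead of a single write/read pair.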
00:08:40.385 20:15:55 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:40.385 20:15:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:40.385 20:15:55 -- common/autotest_common.sh@10 -- # set +x 00:08:40.385 [2024-10-16 20:15:55.185661] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:40.385 [2024-10-16 20:15:55.185773] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62072 ] 00:08:40.642 [2024-10-16 20:15:55.331344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.642 [2024-10-16 20:15:55.474833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.642 [2024-10-16 20:15:55.475128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.642 [2024-10-16 20:15:55.475236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.210 20:15:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:41.210 20:15:56 -- common/autotest_common.sh@852 -- # return 0 00:08:41.210 20:15:56 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:41.210 I/O targets: 00:08:41.210 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:41.210 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:41.210 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:41.210 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:41.210 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:41.210 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:41.210 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:41.210 00:08:41.210 00:08:41.210 CUnit - A unit testing framework for C - Version 2.1-3 00:08:41.210 http://cunit.sourceforge.net/ 00:08:41.210 00:08:41.210 00:08:41.210 Suite: bdevio tests on: Nvme3n1 00:08:41.210 Test: blockdev write read block ...passed 00:08:41.210 Test: blockdev write zeroes read block ...passed 00:08:41.210 Test: blockdev write zeroes read no split ...passed 00:08:41.210 Test: blockdev write zeroes read split ...passed 00:08:41.210 Test: blockdev write zeroes read split partial ...passed 00:08:41.210 Test: blockdev reset ...[2024-10-16 20:15:56.138098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:41.469 [2024-10-16 20:15:56.140506] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.469 passed 00:08:41.469 Test: blockdev write read 8 blocks ...passed 00:08:41.469 Test: blockdev write read size > 128k ...passed 00:08:41.469 Test: blockdev write read invalid size ...passed 00:08:41.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.469 Test: blockdev write read max offset ...passed 00:08:41.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.469 Test: blockdev writev readv 8 blocks ...passed 00:08:41.469 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.469 Test: blockdev writev readv block ...passed 00:08:41.469 Test: blockdev writev readv size > 128k ...passed 00:08:41.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.469 Test: blockdev comparev and writev ...[2024-10-16 20:15:56.148996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f00a000 len:0x1000 00:08:41.469 [2024-10-16 20:15:56.149055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.469 passed 00:08:41.469 Test: blockdev nvme passthru rw ...passed 00:08:41.469 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.469 Test: blockdev nvme admin passthru ...[2024-10-16 20:15:56.149975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.469 [2024-10-16 20:15:56.149995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.469 passed 00:08:41.469 Test: blockdev copy ...passed 00:08:41.469 Suite: bdevio tests on: Nvme2n3 00:08:41.469 Test: blockdev write read block ...passed 00:08:41.469 Test: blockdev write zeroes read block ...passed 00:08:41.469 Test: blockdev write zeroes read no split ...passed 00:08:41.469 Test: blockdev write zeroes read split ...passed 00:08:41.469 Test: blockdev write zeroes read split partial ...passed 00:08:41.469 Test: blockdev reset ...[2024-10-16 20:15:56.207967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:41.469 [2024-10-16 20:15:56.210504] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.469 passed 00:08:41.469 Test: blockdev write read 8 blocks ...passed 00:08:41.469 Test: blockdev write read size > 128k ...passed 00:08:41.469 Test: blockdev write read invalid size ...passed 00:08:41.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.469 Test: blockdev write read max offset ...passed 00:08:41.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.469 Test: blockdev writev readv 8 blocks ...passed 00:08:41.469 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.469 Test: blockdev writev readv block ...passed 00:08:41.469 Test: blockdev writev readv size > 128k ...passed 00:08:41.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.469 Test: blockdev comparev and writev ...[2024-10-16 20:15:56.218844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x266f04000 len:0x1000 00:08:41.469 [2024-10-16 20:15:56.218887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.469 passed 00:08:41.470 Test: blockdev nvme passthru rw ...passed 00:08:41.470 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.470 Test: blockdev nvme admin passthru ...[2024-10-16 20:15:56.219858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.470 [2024-10-16 20:15:56.219883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.470 passed 00:08:41.470 Test: blockdev copy ...passed 00:08:41.470 Suite: bdevio tests on: Nvme2n2 00:08:41.470 Test: blockdev write read block ...passed 00:08:41.470 Test: blockdev write zeroes read block ...passed 00:08:41.470 Test: blockdev write zeroes read no split ...passed 00:08:41.470 Test: blockdev write zeroes read split ...passed 00:08:41.470 Test: blockdev write zeroes read split partial ...passed 00:08:41.470 Test: blockdev reset ...[2024-10-16 20:15:56.276297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:41.470 [2024-10-16 20:15:56.278807] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.470 passed 00:08:41.470 Test: blockdev write read 8 blocks ...passed 00:08:41.470 Test: blockdev write read size > 128k ...passed 00:08:41.470 Test: blockdev write read invalid size ...passed 00:08:41.470 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.470 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.470 Test: blockdev write read max offset ...passed 00:08:41.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.470 Test: blockdev writev readv 8 blocks ...passed 00:08:41.470 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.470 Test: blockdev writev readv block ...passed 00:08:41.470 Test: blockdev writev readv size > 128k ...passed 00:08:41.470 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.470 Test: blockdev comparev and writev ...[2024-10-16 20:15:56.286762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x266f04000 len:0x1000 00:08:41.470 [2024-10-16 20:15:56.286800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.470 passed 00:08:41.470 Test: blockdev nvme passthru rw ...passed 00:08:41.470 Test: blockdev nvme passthru vendor specific ...[2024-10-16 20:15:56.287599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.470 [2024-10-16 20:15:56.287618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.470 passed 00:08:41.470 Test: blockdev nvme admin passthru ...passed 00:08:41.470 Test: blockdev copy ...passed 00:08:41.470 Suite: bdevio tests on: Nvme2n1 00:08:41.470 Test: blockdev write read block ...passed 00:08:41.470 Test: blockdev write zeroes read block ...passed 00:08:41.470 Test: blockdev write zeroes read no split ...passed 00:08:41.470 Test: blockdev write zeroes read split ...passed 00:08:41.470 Test: blockdev write zeroes read split partial ...passed 00:08:41.470 Test: blockdev reset ...[2024-10-16 20:15:56.344097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:41.470 [2024-10-16 20:15:56.346636] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.470 passed 00:08:41.470 Test: blockdev write read 8 blocks ...passed 00:08:41.470 Test: blockdev write read size > 128k ...passed 00:08:41.470 Test: blockdev write read invalid size ...passed 00:08:41.470 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.470 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.470 Test: blockdev write read max offset ...passed 00:08:41.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.470 Test: blockdev writev readv 8 blocks ...passed 00:08:41.470 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.470 Test: blockdev writev readv block ...passed 00:08:41.470 Test: blockdev writev readv size > 128k ...passed 00:08:41.470 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.470 Test: blockdev comparev and writev ...[2024-10-16 20:15:56.355139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27ac3c000 len:0x1000 00:08:41.470 [2024-10-16 20:15:56.355184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.470 passed 00:08:41.470 Test: blockdev nvme passthru rw ...passed 00:08:41.470 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.470 Test: blockdev nvme admin passthru ...[2024-10-16 20:15:56.356009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.470 [2024-10-16 20:15:56.356033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.470 passed 00:08:41.470 Test: blockdev copy ...passed 00:08:41.470 Suite: bdevio tests on: Nvme1n1 00:08:41.470 Test: blockdev write read block ...passed 00:08:41.470 Test: blockdev write zeroes read block ...passed 00:08:41.470 Test: blockdev write zeroes read no split ...passed 00:08:41.470 Test: blockdev write zeroes read split ...passed 00:08:41.728 Test: blockdev write zeroes read split partial ...passed 00:08:41.728 Test: blockdev reset ...[2024-10-16 20:15:56.411436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:41.728 [2024-10-16 20:15:56.413780] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:41.728 passed 00:08:41.728 Test: blockdev write read 8 blocks ...passed 00:08:41.728 Test: blockdev write read size > 128k ...passed 00:08:41.728 Test: blockdev write read invalid size ...passed 00:08:41.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.728 Test: blockdev write read max offset ...passed 00:08:41.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.728 Test: blockdev writev readv 8 blocks ...passed 00:08:41.728 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.728 Test: blockdev writev readv block ...passed 00:08:41.728 Test: blockdev writev readv size > 128k ...passed 00:08:41.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.728 Test: blockdev comparev and writev ...[2024-10-16 20:15:56.421325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27ac38000 len:0x1000 00:08:41.728 [2024-10-16 20:15:56.421364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.728 passed 00:08:41.728 Test: blockdev nvme passthru rw ...passed 00:08:41.728 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.728 Test: blockdev nvme admin passthru ...[2024-10-16 20:15:56.422029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:41.728 [2024-10-16 20:15:56.422064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:41.728 passed 00:08:41.728 Test: blockdev copy ...passed 00:08:41.728 Suite: bdevio tests on: Nvme0n1p2 00:08:41.728 Test: blockdev write read block ...passed 00:08:41.728 Test: blockdev write zeroes read block ...passed 00:08:41.728 Test: blockdev write zeroes read no split ...passed 00:08:41.728 Test: blockdev write zeroes read split ...passed 00:08:41.728 Test: blockdev write zeroes read split partial ...passed 00:08:41.728 Test: blockdev reset ...[2024-10-16 20:15:56.481030] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:41.728 [2024-10-16 20:15:56.483595] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:41.728 passed 00:08:41.728 Test: blockdev write read 8 blocks ...passed 00:08:41.728 Test: blockdev write read size > 128k ...passed 00:08:41.728 Test: blockdev write read invalid size ...passed 00:08:41.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.728 Test: blockdev write read max offset ...passed 00:08:41.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.728 Test: blockdev writev readv 8 blocks ...passed 00:08:41.728 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.728 Test: blockdev writev readv block ...passed 00:08:41.728 Test: blockdev writev readv size > 128k ...passed 00:08:41.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.728 Test: blockdev comparev and writev ...[2024-10-16 20:15:56.490111] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:41.728 separate metadata which is not supported yet. 
00:08:41.728 passed 00:08:41.728 Test: blockdev nvme passthru rw ...passed 00:08:41.728 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.728 Test: blockdev nvme admin passthru ...passed 00:08:41.728 Test: blockdev copy ...passed 00:08:41.728 Suite: bdevio tests on: Nvme0n1p1 00:08:41.728 Test: blockdev write read block ...passed 00:08:41.728 Test: blockdev write zeroes read block ...passed 00:08:41.728 Test: blockdev write zeroes read no split ...passed 00:08:41.728 Test: blockdev write zeroes read split ...passed 00:08:41.728 Test: blockdev write zeroes read split partial ...passed 00:08:41.728 Test: blockdev reset ...[2024-10-16 20:15:56.536331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:41.728 [2024-10-16 20:15:56.538644] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:41.728 passed 00:08:41.728 Test: blockdev write read 8 blocks ...passed 00:08:41.728 Test: blockdev write read size > 128k ...passed 00:08:41.728 Test: blockdev write read invalid size ...passed 00:08:41.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.728 Test: blockdev write read max offset ...passed 00:08:41.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.728 Test: blockdev writev readv 8 blocks ...passed 00:08:41.728 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.728 Test: blockdev writev readv block ...passed 00:08:41.728 Test: blockdev writev readv size > 128k ...passed 00:08:41.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.728 Test: blockdev comparev and writev ...[2024-10-16 20:15:56.545258] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:41.728 separate metadata which is not supported yet. 
00:08:41.728 passed 00:08:41.728 Test: blockdev nvme passthru rw ...passed 00:08:41.728 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.728 Test: blockdev nvme admin passthru ...passed 00:08:41.728 Test: blockdev copy ...passed 00:08:41.728 00:08:41.728 Run Summary: Type Total Ran Passed Failed Inactive 00:08:41.728 suites 7 7 n/a 0 0 00:08:41.728 tests 161 161 161 0 0 00:08:41.728 asserts 1006 1006 1006 0 n/a 00:08:41.728 00:08:41.728 Elapsed time = 1.211 seconds 00:08:41.728 0 00:08:41.728 20:15:56 -- bdev/blockdev.sh@293 -- # killprocess 62072 00:08:41.728 20:15:56 -- common/autotest_common.sh@926 -- # '[' -z 62072 ']' 00:08:41.728 20:15:56 -- common/autotest_common.sh@930 -- # kill -0 62072 00:08:41.728 20:15:56 -- common/autotest_common.sh@931 -- # uname 00:08:41.728 20:15:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:41.728 20:15:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62072 00:08:41.728 20:15:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:41.728 20:15:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:41.728 killing process with pid 62072 00:08:41.728 20:15:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62072' 00:08:41.728 20:15:56 -- common/autotest_common.sh@945 -- # kill 62072 00:08:41.728 20:15:56 -- common/autotest_common.sh@950 -- # wait 62072 00:08:42.295 20:15:57 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:42.295 00:08:42.295 real 0m2.023s 00:08:42.295 user 0m4.990s 00:08:42.295 sys 0m0.268s 00:08:42.295 ************************************ 00:08:42.295 END TEST bdev_bounds 00:08:42.295 ************************************ 00:08:42.295 20:15:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.295 20:15:57 -- common/autotest_common.sh@10 -- # set +x 00:08:42.295 20:15:57 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:42.295 20:15:57 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:42.295 20:15:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.295 20:15:57 -- common/autotest_common.sh@10 -- # set +x 00:08:42.295 ************************************ 00:08:42.295 START TEST bdev_nbd 00:08:42.295 ************************************ 00:08:42.295 20:15:57 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:42.295 20:15:57 -- bdev/blockdev.sh@298 -- # uname -s 00:08:42.295 20:15:57 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:42.295 20:15:57 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.295 20:15:57 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:42.295 20:15:57 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:42.295 20:15:57 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:42.295 20:15:57 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:42.295 20:15:57 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:42.295 20:15:57 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
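The nbd test starting here exports each bdev through the kernel's network block device driver so that ordinary block tools can exercise it. The flow, sketched below with the socket path and bdev names from this run: start bdev_svc with the same JSON config on a dedicated RPC socket, ask it to map a bdev (with no device argument the RPC picks a free /dev/nbd* node and prints it, as the traces below show), and query or tear down the mappings over the same socket:

    # Start the bare bdev service on its own RPC socket:
    sudo /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &

    # Export a bdev; the returned path is the kernel node backing it:
    nbd_dev=$(sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1)
    echo "$nbd_dev"   # e.g. /dev/nbd0

    # List current mappings (JSON), then unmap:
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd_dev"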
00:08:42.295 20:15:57 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:42.295 20:15:57 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:42.295 20:15:57 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:42.295 20:15:57 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:42.295 20:15:57 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:42.295 20:15:57 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:42.295 20:15:57 -- bdev/blockdev.sh@316 -- # nbd_pid=62126 00:08:42.295 20:15:57 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:42.295 20:15:57 -- bdev/blockdev.sh@318 -- # waitforlisten 62126 /var/tmp/spdk-nbd.sock 00:08:42.295 20:15:57 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:42.295 20:15:57 -- common/autotest_common.sh@819 -- # '[' -z 62126 ']' 00:08:42.295 20:15:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:42.295 20:15:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:42.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:42.295 20:15:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:42.295 20:15:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:42.295 20:15:57 -- common/autotest_common.sh@10 -- # set +x 00:08:42.553 [2024-10-16 20:15:57.257451] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
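Before any data is pushed through the exports, each node gets a host-side smoke test, visible in the traces below: waitfornbd polls /proc/partitions until the kernel registers the device, then a single 4 KiB direct-I/O dd read confirms the bdev answers on the NBD path. A sketch of that per-device check (the 20-try bound, grep, dd, and stat steps match the trace; the sleep interval is an assumption, and /tmp/nbdtest stands in for the harness's scratch file):

    nbd_name=nbd0   # device name as returned by nbd_start_disk, minus /dev/
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed polling pause between retries
    done
    # One direct-I/O read through the kernel node:
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    stat -c %s /tmp/nbdtest   # expect 4096 if the read went through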
00:08:42.553 [2024-10-16 20:15:57.257560] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.553 [2024-10-16 20:15:57.404808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.811 [2024-10-16 20:15:57.558157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.377 20:15:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:43.377 20:15:58 -- common/autotest_common.sh@852 -- # return 0 00:08:43.377 20:15:58 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@24 -- # local i 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:43.377 20:15:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:43.377 20:15:58 -- common/autotest_common.sh@857 -- # local i 00:08:43.377 20:15:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:43.377 20:15:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:43.377 20:15:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:43.377 20:15:58 -- common/autotest_common.sh@861 -- # break 00:08:43.377 20:15:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:43.377 20:15:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:43.377 20:15:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.377 1+0 records in 00:08:43.377 1+0 records out 00:08:43.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466103 s, 8.8 MB/s 00:08:43.377 20:15:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.377 20:15:58 -- common/autotest_common.sh@874 -- # size=4096 00:08:43.377 20:15:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.377 20:15:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:43.377 20:15:58 -- common/autotest_common.sh@877 -- # return 0 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@27 -- 
# (( i++ )) 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.377 20:15:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:43.635 20:15:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:43.635 20:15:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:43.635 20:15:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:43.635 20:15:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:43.635 20:15:58 -- common/autotest_common.sh@857 -- # local i 00:08:43.635 20:15:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:43.635 20:15:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:43.635 20:15:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:43.635 20:15:58 -- common/autotest_common.sh@861 -- # break 00:08:43.635 20:15:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:43.635 20:15:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:43.635 20:15:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.635 1+0 records in 00:08:43.635 1+0 records out 00:08:43.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304338 s, 13.5 MB/s 00:08:43.635 20:15:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.635 20:15:58 -- common/autotest_common.sh@874 -- # size=4096 00:08:43.635 20:15:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.635 20:15:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:43.635 20:15:58 -- common/autotest_common.sh@877 -- # return 0 00:08:43.635 20:15:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.635 20:15:58 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.635 20:15:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:43.894 20:15:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:43.894 20:15:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:43.894 20:15:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:43.894 20:15:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:08:43.894 20:15:58 -- common/autotest_common.sh@857 -- # local i 00:08:43.894 20:15:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:43.894 20:15:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:43.894 20:15:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:08:43.894 20:15:58 -- common/autotest_common.sh@861 -- # break 00:08:43.894 20:15:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:43.894 20:15:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:43.894 20:15:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.894 1+0 records in 00:08:43.894 1+0 records out 00:08:43.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368175 s, 11.1 MB/s 00:08:43.894 20:15:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.894 20:15:58 -- common/autotest_common.sh@874 -- # size=4096 00:08:43.894 20:15:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.894 20:15:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:43.894 20:15:58 -- 
common/autotest_common.sh@877 -- # return 0 00:08:43.894 20:15:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.894 20:15:58 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.894 20:15:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:44.152 20:15:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:44.152 20:15:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:44.152 20:15:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:44.152 20:15:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:08:44.152 20:15:58 -- common/autotest_common.sh@857 -- # local i 00:08:44.152 20:15:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:44.152 20:15:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:44.152 20:15:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:08:44.152 20:15:58 -- common/autotest_common.sh@861 -- # break 00:08:44.152 20:15:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:44.152 20:15:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:44.152 20:15:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.152 1+0 records in 00:08:44.152 1+0 records out 00:08:44.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546257 s, 7.5 MB/s 00:08:44.152 20:15:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.152 20:15:58 -- common/autotest_common.sh@874 -- # size=4096 00:08:44.152 20:15:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.152 20:15:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:44.152 20:15:58 -- common/autotest_common.sh@877 -- # return 0 00:08:44.152 20:15:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.152 20:15:58 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.152 20:15:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:44.411 20:15:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:44.411 20:15:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:44.411 20:15:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:44.411 20:15:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:08:44.411 20:15:59 -- common/autotest_common.sh@857 -- # local i 00:08:44.411 20:15:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:44.411 20:15:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:44.411 20:15:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:08:44.411 20:15:59 -- common/autotest_common.sh@861 -- # break 00:08:44.411 20:15:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:44.411 20:15:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:44.411 20:15:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.411 1+0 records in 00:08:44.411 1+0 records out 00:08:44.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413779 s, 9.9 MB/s 00:08:44.411 20:15:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.411 20:15:59 -- common/autotest_common.sh@874 -- # size=4096 00:08:44.411 20:15:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.411 20:15:59 
-- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:44.411 20:15:59 -- common/autotest_common.sh@877 -- # return 0 00:08:44.411 20:15:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.411 20:15:59 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.411 20:15:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:44.669 20:15:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:44.669 20:15:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:44.669 20:15:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:44.669 20:15:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:08:44.669 20:15:59 -- common/autotest_common.sh@857 -- # local i 00:08:44.669 20:15:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:44.669 20:15:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:44.669 20:15:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:08:44.669 20:15:59 -- common/autotest_common.sh@861 -- # break 00:08:44.669 20:15:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:44.669 20:15:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:44.669 20:15:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.669 1+0 records in 00:08:44.669 1+0 records out 00:08:44.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460041 s, 8.9 MB/s 00:08:44.669 20:15:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.669 20:15:59 -- common/autotest_common.sh@874 -- # size=4096 00:08:44.669 20:15:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.669 20:15:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:44.669 20:15:59 -- common/autotest_common.sh@877 -- # return 0 00:08:44.669 20:15:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.669 20:15:59 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.669 20:15:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:44.669 20:15:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:44.669 20:15:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:44.928 20:15:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:08:44.928 20:15:59 -- common/autotest_common.sh@857 -- # local i 00:08:44.928 20:15:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:44.928 20:15:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:44.928 20:15:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:08:44.928 20:15:59 -- common/autotest_common.sh@861 -- # break 00:08:44.928 20:15:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:44.928 20:15:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:44.928 20:15:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.928 1+0 records in 00:08:44.928 1+0 records out 00:08:44.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483187 s, 8.5 MB/s 00:08:44.928 20:15:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.928 20:15:59 -- common/autotest_common.sh@874 -- # size=4096 00:08:44.928 20:15:59 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.928 20:15:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:44.928 20:15:59 -- common/autotest_common.sh@877 -- # return 0 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd0", 00:08:44.928 "bdev_name": "Nvme0n1p1" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd1", 00:08:44.928 "bdev_name": "Nvme0n1p2" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd2", 00:08:44.928 "bdev_name": "Nvme1n1" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd3", 00:08:44.928 "bdev_name": "Nvme2n1" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd4", 00:08:44.928 "bdev_name": "Nvme2n2" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd5", 00:08:44.928 "bdev_name": "Nvme2n3" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd6", 00:08:44.928 "bdev_name": "Nvme3n1" 00:08:44.928 } 00:08:44.928 ]' 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd0", 00:08:44.928 "bdev_name": "Nvme0n1p1" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd1", 00:08:44.928 "bdev_name": "Nvme0n1p2" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd2", 00:08:44.928 "bdev_name": "Nvme1n1" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd3", 00:08:44.928 "bdev_name": "Nvme2n1" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd4", 00:08:44.928 "bdev_name": "Nvme2n2" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd5", 00:08:44.928 "bdev_name": "Nvme2n3" 00:08:44.928 }, 00:08:44.928 { 00:08:44.928 "nbd_device": "/dev/nbd6", 00:08:44.928 "bdev_name": "Nvme3n1" 00:08:44.928 } 00:08:44.928 ]' 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@51 -- # local i 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.928 20:15:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@41 -- # break 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.186 20:16:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@41 -- # break 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.461 20:16:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@41 -- # break 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@41 -- # break 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.732 20:16:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@41 -- # break 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.990 20:16:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:46.247 20:16:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:46.247 20:16:01 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:46.247 20:16:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:46.247 20:16:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.247 20:16:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.247 20:16:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:46.247 20:16:01 -- bdev/nbd_common.sh@41 -- # break 00:08:46.247 20:16:01 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.247 20:16:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.247 20:16:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@41 -- # break 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.505 20:16:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@65 -- # true 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@122 -- # count=0 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@127 -- # return 0 00:08:46.762 20:16:01 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:46.762 20:16:01 
-- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@12 -- # local i 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:46.762 /dev/nbd0 00:08:46.762 20:16:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:46.763 20:16:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:46.763 20:16:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:46.763 20:16:01 -- common/autotest_common.sh@857 -- # local i 00:08:46.763 20:16:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:46.763 20:16:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:46.763 20:16:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:46.763 20:16:01 -- common/autotest_common.sh@861 -- # break 00:08:46.763 20:16:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:46.763 20:16:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:46.763 20:16:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.763 1+0 records in 00:08:46.763 1+0 records out 00:08:46.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239018 s, 17.1 MB/s 00:08:46.763 20:16:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.763 20:16:01 -- common/autotest_common.sh@874 -- # size=4096 00:08:46.763 20:16:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.763 20:16:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:46.763 20:16:01 -- common/autotest_common.sh@877 -- # return 0 00:08:46.763 20:16:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.763 20:16:01 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.763 20:16:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:47.019 /dev/nbd1 00:08:47.019 20:16:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:47.019 20:16:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:47.019 20:16:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:47.019 20:16:01 -- common/autotest_common.sh@857 -- # local i 00:08:47.019 20:16:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:47.019 20:16:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:47.019 20:16:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:47.019 20:16:01 -- common/autotest_common.sh@861 -- # break 00:08:47.019 20:16:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:47.019 20:16:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:47.019 20:16:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.019 1+0 records in 00:08:47.019 1+0 records out 00:08:47.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275111 s, 14.9 MB/s 00:08:47.019 20:16:01 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.019 20:16:01 -- common/autotest_common.sh@874 -- # size=4096 00:08:47.019 20:16:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.019 20:16:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:47.019 20:16:01 -- common/autotest_common.sh@877 -- # return 0 00:08:47.019 20:16:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.019 20:16:01 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.019 20:16:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:47.276 /dev/nbd10 00:08:47.276 20:16:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:47.276 20:16:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:47.276 20:16:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:08:47.276 20:16:02 -- common/autotest_common.sh@857 -- # local i 00:08:47.276 20:16:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:47.276 20:16:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:47.276 20:16:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:08:47.276 20:16:02 -- common/autotest_common.sh@861 -- # break 00:08:47.276 20:16:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:47.276 20:16:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:47.276 20:16:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.276 1+0 records in 00:08:47.276 1+0 records out 00:08:47.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561301 s, 7.3 MB/s 00:08:47.276 20:16:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.276 20:16:02 -- common/autotest_common.sh@874 -- # size=4096 00:08:47.276 20:16:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.276 20:16:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:47.276 20:16:02 -- common/autotest_common.sh@877 -- # return 0 00:08:47.276 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.276 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.276 20:16:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:47.533 /dev/nbd11 00:08:47.533 20:16:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:47.533 20:16:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:47.533 20:16:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:08:47.533 20:16:02 -- common/autotest_common.sh@857 -- # local i 00:08:47.533 20:16:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:47.533 20:16:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:47.533 20:16:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:08:47.533 20:16:02 -- common/autotest_common.sh@861 -- # break 00:08:47.533 20:16:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:47.533 20:16:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:47.533 20:16:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.533 1+0 records in 00:08:47.533 1+0 records out 00:08:47.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367607 s, 11.1 MB/s 00:08:47.533 20:16:02 -- common/autotest_common.sh@874 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.533 20:16:02 -- common/autotest_common.sh@874 -- # size=4096 00:08:47.534 20:16:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.534 20:16:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:47.534 20:16:02 -- common/autotest_common.sh@877 -- # return 0 00:08:47.534 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.534 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.534 20:16:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:47.791 /dev/nbd12 00:08:47.791 20:16:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:47.791 20:16:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:47.791 20:16:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:08:47.791 20:16:02 -- common/autotest_common.sh@857 -- # local i 00:08:47.791 20:16:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:47.791 20:16:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:47.791 20:16:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:08:47.791 20:16:02 -- common/autotest_common.sh@861 -- # break 00:08:47.791 20:16:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:47.791 20:16:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:47.791 20:16:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.791 1+0 records in 00:08:47.791 1+0 records out 00:08:47.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262733 s, 15.6 MB/s 00:08:47.791 20:16:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.791 20:16:02 -- common/autotest_common.sh@874 -- # size=4096 00:08:47.791 20:16:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.791 20:16:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:47.791 20:16:02 -- common/autotest_common.sh@877 -- # return 0 00:08:47.791 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.791 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.791 20:16:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:47.791 /dev/nbd13 00:08:48.049 20:16:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:48.049 20:16:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:48.049 20:16:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:08:48.049 20:16:02 -- common/autotest_common.sh@857 -- # local i 00:08:48.049 20:16:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:48.049 20:16:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:48.049 20:16:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:08:48.049 20:16:02 -- common/autotest_common.sh@861 -- # break 00:08:48.049 20:16:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:48.049 20:16:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:48.049 20:16:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.049 1+0 records in 00:08:48.049 1+0 records out 00:08:48.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454936 s, 9.0 MB/s 00:08:48.049 20:16:02 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.049 20:16:02 -- common/autotest_common.sh@874 -- # size=4096 00:08:48.049 20:16:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.049 20:16:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:48.049 20:16:02 -- common/autotest_common.sh@877 -- # return 0 00:08:48.049 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.049 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.049 20:16:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:48.049 /dev/nbd14 00:08:48.049 20:16:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:48.049 20:16:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:48.049 20:16:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:08:48.049 20:16:02 -- common/autotest_common.sh@857 -- # local i 00:08:48.049 20:16:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:48.049 20:16:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:48.049 20:16:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:08:48.049 20:16:02 -- common/autotest_common.sh@861 -- # break 00:08:48.049 20:16:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:48.049 20:16:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:48.049 20:16:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.049 1+0 records in 00:08:48.049 1+0 records out 00:08:48.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604621 s, 6.8 MB/s 00:08:48.050 20:16:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.050 20:16:02 -- common/autotest_common.sh@874 -- # size=4096 00:08:48.050 20:16:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.050 20:16:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:48.050 20:16:02 -- common/autotest_common.sh@877 -- # return 0 00:08:48.050 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.050 20:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.050 20:16:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.050 20:16:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.050 20:16:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.308 20:16:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd0", 00:08:48.308 "bdev_name": "Nvme0n1p1" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd1", 00:08:48.308 "bdev_name": "Nvme0n1p2" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd10", 00:08:48.308 "bdev_name": "Nvme1n1" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd11", 00:08:48.308 "bdev_name": "Nvme2n1" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd12", 00:08:48.308 "bdev_name": "Nvme2n2" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd13", 00:08:48.308 "bdev_name": "Nvme2n3" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd14", 00:08:48.308 "bdev_name": "Nvme3n1" 00:08:48.308 } 00:08:48.308 ]' 00:08:48.308 20:16:03 -- bdev/nbd_common.sh@64 -- # echo 
'[ 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd0", 00:08:48.308 "bdev_name": "Nvme0n1p1" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd1", 00:08:48.308 "bdev_name": "Nvme0n1p2" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd10", 00:08:48.308 "bdev_name": "Nvme1n1" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd11", 00:08:48.308 "bdev_name": "Nvme2n1" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd12", 00:08:48.308 "bdev_name": "Nvme2n2" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd13", 00:08:48.308 "bdev_name": "Nvme2n3" 00:08:48.308 }, 00:08:48.308 { 00:08:48.308 "nbd_device": "/dev/nbd14", 00:08:48.308 "bdev_name": "Nvme3n1" 00:08:48.308 } 00:08:48.308 ]' 00:08:48.308 20:16:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.308 20:16:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.308 /dev/nbd1 00:08:48.309 /dev/nbd10 00:08:48.309 /dev/nbd11 00:08:48.309 /dev/nbd12 00:08:48.309 /dev/nbd13 00:08:48.309 /dev/nbd14' 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.309 /dev/nbd1 00:08:48.309 /dev/nbd10 00:08:48.309 /dev/nbd11 00:08:48.309 /dev/nbd12 00:08:48.309 /dev/nbd13 00:08:48.309 /dev/nbd14' 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@65 -- # count=7 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@95 -- # count=7 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:48.309 256+0 records in 00:08:48.309 256+0 records out 00:08:48.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0058643 s, 179 MB/s 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.309 20:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:48.567 256+0 records in 00:08:48.567 256+0 records out 00:08:48.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0603596 s, 17.4 MB/s 00:08:48.567 20:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.567 20:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:48.567 256+0 records in 00:08:48.567 256+0 records out 00:08:48.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0627889 s, 16.7 MB/s 00:08:48.567 20:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.567 20:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:48.567 256+0 records in 00:08:48.567 256+0 records out 
00:08:48.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0639231 s, 16.4 MB/s 00:08:48.567 20:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.567 20:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:48.567 256+0 records in 00:08:48.567 256+0 records out 00:08:48.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0596257 s, 17.6 MB/s 00:08:48.567 20:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.567 20:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:48.826 256+0 records in 00:08:48.826 256+0 records out 00:08:48.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0601307 s, 17.4 MB/s 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:48.826 256+0 records in 00:08:48.826 256+0 records out 00:08:48.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0609393 s, 17.2 MB/s 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:48.826 256+0 records in 00:08:48.826 256+0 records out 00:08:48.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.05894 s, 17.8 MB/s 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@51 -- # local i 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.826 20:16:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@41 -- # break 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.084 20:16:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@41 -- # break 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.343 20:16:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@41 -- # break 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd11 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@41 -- # break 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.601 20:16:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@41 -- # break 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.859 20:16:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@41 -- # break 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.116 20:16:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@41 -- # break 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@65 -- # true 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@65 -- # count=0 00:08:50.374 20:16:05 
-- bdev/nbd_common.sh@66 -- # echo 0 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@104 -- # count=0 00:08:50.374 20:16:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:50.632 20:16:05 -- bdev/nbd_common.sh@109 -- # return 0 00:08:50.632 20:16:05 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:50.632 20:16:05 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.632 20:16:05 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:50.632 20:16:05 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:50.632 20:16:05 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:50.632 20:16:05 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:50.632 malloc_lvol_verify 00:08:50.632 20:16:05 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:50.890 11da06fd-1de8-4f7c-aed2-e05c12deb097 00:08:50.890 20:16:05 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:51.147 0ebc66f7-02ef-4704-858b-fdf02110c168 00:08:51.147 20:16:05 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:51.147 /dev/nbd0 00:08:51.147 20:16:06 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:51.405 mke2fs 1.47.0 (5-Feb-2023) 00:08:51.405 Discarding device blocks: 0/4096 done 00:08:51.405 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:51.405 00:08:51.405 Allocating group tables: 0/1 done 00:08:51.405 Writing inode tables: 0/1 done 00:08:51.405 Creating journal (1024 blocks): done 00:08:51.405 Writing superblocks and filesystem accounting information: 0/1 done 00:08:51.405 00:08:51.405 20:16:06 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:51.405 20:16:06 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:51.405 20:16:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.405 20:16:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:51.405 20:16:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.405 20:16:06 -- bdev/nbd_common.sh@51 -- # local i 00:08:51.405 20:16:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.405 20:16:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:51.405 20:16:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:51.663 20:16:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:51.663 20:16:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:51.663 20:16:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.663 20:16:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.663 20:16:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:51.663 20:16:06 -- bdev/nbd_common.sh@41 -- # break 00:08:51.663 20:16:06 -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.663 20:16:06 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:51.663 20:16:06 -- bdev/nbd_common.sh@147 -- # return 0 00:08:51.663 20:16:06 -- bdev/blockdev.sh@324 -- # killprocess 62126 00:08:51.663 20:16:06 -- 
common/autotest_common.sh@926 -- # '[' -z 62126 ']' 00:08:51.663 20:16:06 -- common/autotest_common.sh@930 -- # kill -0 62126 00:08:51.663 20:16:06 -- common/autotest_common.sh@931 -- # uname 00:08:51.663 20:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:51.663 20:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62126 00:08:51.663 20:16:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:51.663 20:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:51.663 20:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62126' 00:08:51.663 killing process with pid 62126 00:08:51.663 20:16:06 -- common/autotest_common.sh@945 -- # kill 62126 00:08:51.663 20:16:06 -- common/autotest_common.sh@950 -- # wait 62126 00:08:52.595 20:16:07 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:52.595 00:08:52.595 real 0m10.083s 00:08:52.595 user 0m14.395s 00:08:52.595 sys 0m3.237s 00:08:52.595 ************************************ 00:08:52.595 20:16:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.595 20:16:07 -- common/autotest_common.sh@10 -- # set +x 00:08:52.595 END TEST bdev_nbd 00:08:52.595 ************************************ 00:08:52.595 skipping fio tests on NVMe due to multi-ns failures. 00:08:52.595 20:16:07 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:52.595 20:16:07 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:52.595 20:16:07 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:52.595 20:16:07 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:52.595 20:16:07 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:52.595 20:16:07 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:52.595 20:16:07 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:08:52.595 20:16:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.595 20:16:07 -- common/autotest_common.sh@10 -- # set +x 00:08:52.595 ************************************ 00:08:52.595 START TEST bdev_verify 00:08:52.595 ************************************ 00:08:52.595 20:16:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:52.595 [2024-10-16 20:16:07.386727] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:52.595 [2024-10-16 20:16:07.386843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62533 ] 00:08:52.853 [2024-10-16 20:16:07.534395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:52.853 [2024-10-16 20:16:07.711336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.853 [2024-10-16 20:16:07.711413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.419 Running I/O for 5 seconds... 
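[Editor's note] The bdev_nbd test that just finished above spends most of its trace re-entering two polling helpers, once per /dev/nbdN: nbd_common.sh's waitfornbd_exit (after nbd_stop_disk, poll /proc/partitions until the name disappears) and autotest_common.sh's waitfornbd (poll until the name appears, then prove the device actually serves I/O with a direct read). A condensed sketch of the latter follows — the 20-try bound, the dd/stat/rm sequence, and the non-zero size check are all visible in the trace; the sleep between retries is an assumption, since the log only ever shows the first iteration succeeding:

    waitfornbd() {
        local nbd_name=$1 i size
        # First wait for the kernel to register the device at all.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # The device can show up in /proc/partitions before the nbd server
        # behind it is ready to complete I/O, so read one 4 KiB block with
        # O_DIRECT until a non-empty read comes back.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && break
            sleep 0.1
        done
    }

The direct read is the important part: appearing as a block device and accepting I/O are two separate readiness events for nbd, and the 1+0/4096-byte dd lines repeated throughout the trace are that second check passing on the first attempt each time.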
00:08:58.729 00:08:58.729 Latency(us) 00:08:58.729 [2024-10-16T20:16:13.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x0 length 0x5e800 00:08:58.729 Nvme0n1p1 : 5.05 2521.22 9.85 0.00 0.00 50600.01 10485.76 61301.37 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x5e800 length 0x5e800 00:08:58.729 Nvme0n1p1 : 5.05 2527.92 9.87 0.00 0.00 50481.75 8368.44 58074.98 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x0 length 0x5e7ff 00:08:58.729 Nvme0n1p2 : 5.05 2520.47 9.85 0.00 0.00 50557.46 11141.12 57671.68 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:08:58.729 Nvme0n1p2 : 5.05 2527.08 9.87 0.00 0.00 50441.52 8822.15 54041.99 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x0 length 0xa0000 00:08:58.729 Nvme1n1 : 5.05 2526.10 9.87 0.00 0.00 50363.14 2886.10 55251.89 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0xa0000 length 0xa0000 00:08:58.729 Nvme1n1 : 5.05 2526.23 9.87 0.00 0.00 50374.58 9427.10 53235.40 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x0 length 0x80000 00:08:58.729 Nvme2n1 : 5.06 2524.69 9.86 0.00 0.00 50304.77 5091.64 54848.59 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x80000 length 0x80000 00:08:58.729 Nvme2n1 : 5.06 2524.82 9.86 0.00 0.00 50317.20 10334.52 54848.59 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x0 length 0x80000 00:08:58.729 Nvme2n2 : 5.06 2530.69 9.89 0.00 0.00 50157.26 2671.85 52025.50 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x80000 length 0x80000 00:08:58.729 Nvme2n2 : 5.06 2530.99 9.89 0.00 0.00 50194.33 2356.78 55655.19 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x0 length 0x80000 00:08:58.729 Nvme2n3 : 5.07 2529.37 9.88 0.00 0.00 50132.40 4032.98 52025.50 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x80000 length 0x80000 00:08:58.729 Nvme2n3 : 5.06 2530.03 9.88 0.00 0.00 50166.83 3402.83 56058.49 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x0 length 0x20000 00:08:58.729 Nvme3n1 : 5.07 2528.04 9.88 0.00 0.00 50112.74 5620.97 51622.20 00:08:58.729 [2024-10-16T20:16:13.658Z] Job: Nvme3n1 (Core Mask 0x2, 
workload: verify, depth: 128, IO size: 4096) 00:08:58.729 Verification LBA range: start 0x20000 length 0x20000 00:08:58.729 Nvme3n1 : 5.07 2528.72 9.88 0.00 0.00 50142.43 5242.88 56461.78 00:08:58.729 [2024-10-16T20:16:13.658Z] =================================================================================================================== 00:08:58.729 [2024-10-16T20:16:13.658Z] Total : 35376.34 138.19 0.00 0.00 50310.09 2356.78 61301.37 00:09:06.876 00:09:06.876 real 0m13.279s 00:09:06.876 user 0m25.347s 00:09:06.876 sys 0m0.292s 00:09:06.876 20:16:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.876 20:16:20 -- common/autotest_common.sh@10 -- # set +x 00:09:06.876 ************************************ 00:09:06.876 END TEST bdev_verify 00:09:06.876 ************************************ 00:09:06.876 20:16:20 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:06.876 20:16:20 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:09:06.876 20:16:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.876 20:16:20 -- common/autotest_common.sh@10 -- # set +x 00:09:06.876 ************************************ 00:09:06.876 START TEST bdev_verify_big_io 00:09:06.876 ************************************ 00:09:06.876 20:16:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:06.876 [2024-10-16 20:16:20.748545] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:06.876 [2024-10-16 20:16:20.748684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62698 ] 00:09:06.876 [2024-10-16 20:16:20.901828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:06.876 [2024-10-16 20:16:21.130079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.876 [2024-10-16 20:16:21.130120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.137 Running I/O for 5 seconds... 
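[Editor's note] The functional verify pass that just completed (TEST bdev_verify) is a single bdevperf invocation, taken verbatim from the run_test line above; the flag glosses are standard bdevperf options:

    # -q 128: queue depth per job; -o 4096: I/O size in bytes; -w verify:
    # write, read back, and compare; -t 5: seconds to run; -m 0x3: reactor
    # core mask (two cores); -C: drive every bdev from each core, which is
    # why every bdev shows a paired 0x1/0x2 row in the latency table.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3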
00:09:12.476 00:09:12.476 Latency(us) 00:09:12.476 [2024-10-16T20:16:27.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x0 length 0x5e80 00:09:12.476 Nvme0n1p1 : 5.36 259.71 16.23 0.00 0.00 483264.63 55251.89 774333.05 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x5e80 length 0x5e80 00:09:12.476 Nvme0n1p1 : 5.45 237.83 14.86 0.00 0.00 503936.74 17745.13 525901.19 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x0 length 0x5e7f 00:09:12.476 Nvme0n1p2 : 5.36 259.64 16.23 0.00 0.00 476548.31 55251.89 706578.90 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:12.476 Nvme0n1p2 : 5.46 246.88 15.43 0.00 0.00 481004.47 7864.32 603334.50 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x0 length 0xa000 00:09:12.476 Nvme1n1 : 5.39 265.39 16.59 0.00 0.00 461646.02 28835.84 638824.76 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0xa000 length 0xa000 00:09:12.476 Nvme1n1 : 5.47 254.11 15.88 0.00 0.00 461414.86 6805.66 625919.21 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x0 length 0x8000 00:09:12.476 Nvme2n1 : 5.39 265.32 16.58 0.00 0.00 455134.86 29037.49 580749.78 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x8000 length 0x8000 00:09:12.476 Nvme2n1 : 5.38 225.71 14.11 0.00 0.00 555805.41 70980.53 735616.39 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x0 length 0x8000 00:09:12.476 Nvme2n2 : 5.43 272.16 17.01 0.00 0.00 438065.01 38918.30 519448.42 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x8000 length 0x8000 00:09:12.476 Nvme2n2 : 5.38 225.63 14.10 0.00 0.00 548920.10 71787.13 683994.19 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x0 length 0x8000 00:09:12.476 Nvme2n3 : 5.46 286.91 17.93 0.00 0.00 411555.56 17946.78 709805.29 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x8000 length 0x8000 00:09:12.476 Nvme2n3 : 5.43 230.68 14.42 0.00 0.00 530712.45 49202.41 632371.99 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x0 length 0x2000 00:09:12.476 Nvme3n1 : 5.47 303.05 18.94 0.00 0.00 384913.96 4789.17 864671.90 00:09:12.476 [2024-10-16T20:16:27.405Z] Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.476 Verification LBA range: start 0x2000 length 0x2000 00:09:12.476 Nvme3n1 : 5.43 230.61 14.41 0.00 0.00 523798.28 49807.36 583976.17 00:09:12.476 [2024-10-16T20:16:27.405Z] =================================================================================================================== 00:09:12.476 [2024-10-16T20:16:27.405Z] Total : 3563.62 222.73 0.00 0.00 475424.97 4789.17 864671.90 00:09:15.022 ************************************ 00:09:15.022 END TEST bdev_verify_big_io 00:09:15.022 ************************************ 00:09:15.022 00:09:15.022 real 0m8.992s 00:09:15.022 user 0m16.719s 00:09:15.022 sys 0m0.317s 00:09:15.022 20:16:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.022 20:16:29 -- common/autotest_common.sh@10 -- # set +x 00:09:15.022 20:16:29 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:15.022 20:16:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:15.022 20:16:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:15.022 20:16:29 -- common/autotest_common.sh@10 -- # set +x 00:09:15.022 ************************************ 00:09:15.022 START TEST bdev_write_zeroes 00:09:15.022 ************************************ 00:09:15.022 20:16:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:15.022 [2024-10-16 20:16:29.789166] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:15.022 [2024-10-16 20:16:29.789282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62818 ] 00:09:15.022 [2024-10-16 20:16:29.936907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.282 [2024-10-16 20:16:30.145640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.852 Running I/O for 1 seconds... 
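[Editor's note] The big-I/O pass that just finished and the write_zeroes pass starting above reuse the same binary and the same JSON config; only the workload flags move, as the run_test lines show directly:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # bdev_verify_big_io: same verify workload, but 64 KiB I/Os instead of 4 KiB.
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3
    # bdev_write_zeroes: zero-fill workload, one second, single reactor core.
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1

The larger I/O size is also why the big-I/O table reports IOPS in the hundreds rather than the thousands: throughput stays comparable while each operation moves 16x the data.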
00:09:18.396 00:09:18.396 Latency(us) 00:09:18.396 [2024-10-16T20:16:33.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.396 [2024-10-16T20:16:33.325Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.396 Nvme0n1p1 : 2.05 76.49 0.30 0.00 0.00 1410688.89 13006.38 2039077.02 00:09:18.396 [2024-10-16T20:16:33.325Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.396 Nvme0n1p2 : 1.99 64.17 0.25 0.00 0.00 1788048.94 1542213.32 2000360.37 00:09:18.396 [2024-10-16T20:16:33.325Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.396 Nvme1n1 : 1.82 140.89 0.55 0.00 0.00 905481.85 10082.46 1806777.11 00:09:18.396 [2024-10-16T20:16:33.325Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.396 Nvme2n1 : 1.55 82.38 0.32 0.00 0.00 1545439.70 1542213.32 1548666.09 00:09:18.396 [2024-10-16T20:16:33.325Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.396 Nvme2n2 : 1.81 70.64 0.28 0.00 0.00 1800324.33 1793871.56 1806777.11 00:09:18.396 [2024-10-16T20:16:33.325Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.396 Nvme2n3 : 1.81 70.57 0.28 0.00 0.00 1800324.33 1793871.56 1806777.11 00:09:18.396 [2024-10-16T20:16:33.325Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.396 Nvme3n1 : 1.82 70.50 0.28 0.00 0.00 1800324.33 1793871.56 1806777.11 00:09:18.396 [2024-10-16T20:16:33.325Z] =================================================================================================================== 00:09:18.396 [2024-10-16T20:16:33.325Z] Total : 575.64 2.25 0.00 0.00 1492205.70 10082.46 2039077.02 00:09:20.305 00:09:20.305 real 0m5.195s 00:09:20.305 user 0m4.805s 00:09:20.305 sys 0m0.266s 00:09:20.305 ************************************ 00:09:20.305 END TEST bdev_write_zeroes 00:09:20.305 ************************************ 00:09:20.305 20:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.305 20:16:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.305 20:16:34 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:20.305 20:16:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:20.305 20:16:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:20.305 20:16:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.305 ************************************ 00:09:20.305 START TEST bdev_json_nonenclosed 00:09:20.305 ************************************ 00:09:20.305 20:16:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:20.305 [2024-10-16 20:16:35.051987] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:20.305 [2024-10-16 20:16:35.052155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62898 ] 00:09:20.305 [2024-10-16 20:16:35.208486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.565 [2024-10-16 20:16:35.453914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.565 [2024-10-16 20:16:35.454157] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:20.565 [2024-10-16 20:16:35.454180] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:21.137 00:09:21.137 real 0m0.791s 00:09:21.137 user 0m0.555s 00:09:21.137 sys 0m0.127s 00:09:21.137 ************************************ 00:09:21.137 END TEST bdev_json_nonenclosed 00:09:21.137 ************************************ 00:09:21.137 20:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.137 20:16:35 -- common/autotest_common.sh@10 -- # set +x 00:09:21.137 20:16:35 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:21.137 20:16:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:21.137 20:16:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.137 20:16:35 -- common/autotest_common.sh@10 -- # set +x 00:09:21.137 ************************************ 00:09:21.137 START TEST bdev_json_nonarray 00:09:21.137 ************************************ 00:09:21.137 20:16:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:21.137 [2024-10-16 20:16:35.915099] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:21.137 [2024-10-16 20:16:35.915242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62919 ] 00:09:21.405 [2024-10-16 20:16:36.067830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.405 [2024-10-16 20:16:36.306432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.405 [2024-10-16 20:16:36.306879] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:21.405 [2024-10-16 20:16:36.306917] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:21.979 00:09:21.979 real 0m0.781s 00:09:21.979 user 0m0.538s 00:09:21.979 sys 0m0.134s 00:09:21.979 ************************************ 00:09:21.979 END TEST bdev_json_nonarray 00:09:21.979 ************************************ 00:09:21.979 20:16:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.979 20:16:36 -- common/autotest_common.sh@10 -- # set +x 00:09:21.979 20:16:36 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:21.979 20:16:36 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:21.979 20:16:36 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:21.979 20:16:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:21.979 20:16:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.979 20:16:36 -- common/autotest_common.sh@10 -- # set +x 00:09:21.979 ************************************ 00:09:21.979 START TEST bdev_gpt_uuid 00:09:21.979 ************************************ 00:09:21.979 20:16:36 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:09:21.979 20:16:36 -- bdev/blockdev.sh@612 -- # local bdev 00:09:21.979 20:16:36 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:21.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.979 20:16:36 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62950 00:09:21.979 20:16:36 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:21.979 20:16:36 -- bdev/blockdev.sh@47 -- # waitforlisten 62950 00:09:21.979 20:16:36 -- common/autotest_common.sh@819 -- # '[' -z 62950 ']' 00:09:21.979 20:16:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.979 20:16:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:21.979 20:16:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.979 20:16:36 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:21.979 20:16:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:21.979 20:16:36 -- common/autotest_common.sh@10 -- # set +x 00:09:21.979 [2024-10-16 20:16:36.771878] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:21.979 [2024-10-16 20:16:36.772241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62950 ] 00:09:22.241 [2024-10-16 20:16:36.924611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.506 [2024-10-16 20:16:37.174657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:22.506 [2024-10-16 20:16:37.174890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.450 20:16:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:23.450 20:16:38 -- common/autotest_common.sh@852 -- # return 0 00:09:23.450 20:16:38 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:23.450 20:16:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:23.450 20:16:38 -- common/autotest_common.sh@10 -- # set +x 00:09:23.712 Some configs were skipped because the RPC state that can call them passed over. 
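[Editor's note] The two json_config failures logged above (bdev_json_nonenclosed and bdev_json_nonarray) are deliberate negative tests: each feeds bdevperf a malformed config and expects spdk_subsystem_init_from_json_config to reject it with spdk_app_stop'd on non-zero. The fixture files themselves are never echoed into this log, so the following are hypothetical minimal inputs that would trigger the same two error strings — not the repo's actual nonenclosed.json/nonarray.json:

    # Top level is an array, not an object:
    cat > /tmp/nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF
    # -> "Invalid JSON configuration: not enclosed in {}."

    # Top level is an object, but "subsystems" is not an array:
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    # -> "Invalid JSON configuration: 'subsystems' should be an array."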
00:09:23.712 20:16:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:23.712 20:16:38 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:23.712 20:16:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:23.712 20:16:38 -- common/autotest_common.sh@10 -- # set +x 00:09:23.712 20:16:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:23.712 20:16:38 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:23.712 20:16:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:23.712 20:16:38 -- common/autotest_common.sh@10 -- # set +x 00:09:23.974 20:16:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:23.974 20:16:38 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:23.974 { 00:09:23.974 "name": "Nvme0n1p1", 00:09:23.974 "aliases": [ 00:09:23.974 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:23.974 ], 00:09:23.974 "product_name": "GPT Disk", 00:09:23.974 "block_size": 4096, 00:09:23.974 "num_blocks": 774144, 00:09:23.974 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:23.974 "md_size": 64, 00:09:23.974 "md_interleave": false, 00:09:23.974 "dif_type": 0, 00:09:23.974 "assigned_rate_limits": { 00:09:23.974 "rw_ios_per_sec": 0, 00:09:23.974 "rw_mbytes_per_sec": 0, 00:09:23.974 "r_mbytes_per_sec": 0, 00:09:23.974 "w_mbytes_per_sec": 0 00:09:23.974 }, 00:09:23.974 "claimed": false, 00:09:23.974 "zoned": false, 00:09:23.974 "supported_io_types": { 00:09:23.974 "read": true, 00:09:23.974 "write": true, 00:09:23.974 "unmap": true, 00:09:23.974 "write_zeroes": true, 00:09:23.974 "flush": true, 00:09:23.974 "reset": true, 00:09:23.974 "compare": true, 00:09:23.974 "compare_and_write": false, 00:09:23.974 "abort": true, 00:09:23.974 "nvme_admin": false, 00:09:23.974 "nvme_io": false 00:09:23.974 }, 00:09:23.974 "driver_specific": { 00:09:23.974 "gpt": { 00:09:23.974 "base_bdev": "Nvme0n1", 00:09:23.974 "offset_blocks": 256, 00:09:23.974 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:23.974 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:23.974 "partition_name": "SPDK_TEST_first" 00:09:23.974 } 00:09:23.974 } 00:09:23.974 } 00:09:23.974 ]' 00:09:23.974 20:16:38 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:23.974 20:16:38 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:23.974 20:16:38 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:23.974 20:16:38 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:23.974 20:16:38 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:23.974 20:16:38 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:23.975 20:16:38 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:23.975 20:16:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:23.975 20:16:38 -- common/autotest_common.sh@10 -- # set +x 00:09:23.975 20:16:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:23.975 20:16:38 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:23.975 { 00:09:23.975 "name": "Nvme0n1p2", 00:09:23.975 "aliases": [ 00:09:23.975 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:23.975 ], 00:09:23.975 "product_name": "GPT Disk", 00:09:23.975 "block_size": 4096, 00:09:23.975 "num_blocks": 774143, 00:09:23.975 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:23.975 "md_size": 64, 00:09:23.975 "md_interleave": false, 00:09:23.975 "dif_type": 0, 00:09:23.975 "assigned_rate_limits": { 00:09:23.975 "rw_ios_per_sec": 0, 00:09:23.975 "rw_mbytes_per_sec": 0, 00:09:23.975 "r_mbytes_per_sec": 0, 00:09:23.975 "w_mbytes_per_sec": 0 00:09:23.975 }, 00:09:23.975 "claimed": false, 00:09:23.975 "zoned": false, 00:09:23.975 "supported_io_types": { 00:09:23.975 "read": true, 00:09:23.975 "write": true, 00:09:23.975 "unmap": true, 00:09:23.975 "write_zeroes": true, 00:09:23.975 "flush": true, 00:09:23.975 "reset": true, 00:09:23.975 "compare": true, 00:09:23.975 "compare_and_write": false, 00:09:23.975 "abort": true, 00:09:23.975 "nvme_admin": false, 00:09:23.975 "nvme_io": false 00:09:23.975 }, 00:09:23.975 "driver_specific": { 00:09:23.975 "gpt": { 00:09:23.975 "base_bdev": "Nvme0n1", 00:09:23.975 "offset_blocks": 774400, 00:09:23.975 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:23.975 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:23.975 "partition_name": "SPDK_TEST_second" 00:09:23.975 } 00:09:23.975 } 00:09:23.975 } 00:09:23.975 ]' 00:09:23.975 20:16:38 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:23.975 20:16:38 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:23.975 20:16:38 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:23.975 20:16:38 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:23.975 20:16:38 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:23.975 20:16:38 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:23.975 20:16:38 -- bdev/blockdev.sh@629 -- # killprocess 62950 00:09:23.975 20:16:38 -- common/autotest_common.sh@926 -- # '[' -z 62950 ']' 00:09:23.975 20:16:38 -- common/autotest_common.sh@930 -- # kill -0 62950 00:09:23.975 20:16:38 -- common/autotest_common.sh@931 -- # uname 00:09:23.975 20:16:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:23.975 20:16:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62950 00:09:23.975 killing process with pid 62950 00:09:23.975 20:16:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:23.975 20:16:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:23.975 20:16:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62950' 00:09:23.975 20:16:38 -- common/autotest_common.sh@945 -- # kill 62950 00:09:23.975 20:16:38 -- common/autotest_common.sh@950 -- # wait 62950 00:09:25.361 ************************************ 00:09:25.362 END TEST bdev_gpt_uuid 00:09:25.362 ************************************ 00:09:25.362 00:09:25.362 real 0m3.589s 00:09:25.362 user 0m3.752s 00:09:25.362 sys 0m0.520s 00:09:25.362 20:16:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.362 20:16:40 -- common/autotest_common.sh@10 -- # set +x 00:09:25.623 20:16:40 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:25.623 20:16:40 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:25.623 20:16:40 -- bdev/blockdev.sh@809 -- # cleanup 00:09:25.623 20:16:40 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:25.623 20:16:40 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:25.623 20:16:40 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:25.623 20:16:40 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:25.623 20:16:40 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:25.623 20:16:40 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:25.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.884 Waiting for block devices as requested 00:09:26.145 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.145 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.145 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.145 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.442 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:31.442 20:16:46 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:31.442 20:16:46 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:31.704 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:31.704 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:31.704 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:31.704 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:31.704 20:16:46 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:31.704 00:09:31.704 real 1m5.202s 00:09:31.704 user 1m25.812s 00:09:31.704 sys 0m7.778s 00:09:31.704 ************************************ 00:09:31.704 END TEST blockdev_nvme_gpt 00:09:31.704 ************************************ 00:09:31.704 20:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.704 20:16:46 -- common/autotest_common.sh@10 -- # set +x 00:09:31.704 20:16:46 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:31.704 20:16:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:31.704 20:16:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:31.704 20:16:46 -- common/autotest_common.sh@10 -- # set +x 00:09:31.704 ************************************ 00:09:31.704 START TEST nvme 00:09:31.704 ************************************ 00:09:31.704 20:16:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:31.704 * Looking for test storage... 00:09:31.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.704 20:16:46 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:32.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.909 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.909 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.909 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.909 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.909 20:16:47 -- nvme/nvme.sh@79 -- # uname 00:09:32.909 20:16:47 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:32.909 20:16:47 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:32.909 20:16:47 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:32.909 20:16:47 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:32.909 20:16:47 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:09:32.909 20:16:47 -- common/autotest_common.sh@1045 -- # echo 0 00:09:32.909 Waiting for stub to ready for secondary processes... 
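Note: start_stub launches test/app/stub (-s 4096 -i 0 -m 0xE) as a long-lived DPDK primary process that reserves hugepage memory and claims the NVMe controllers once, so each subsequent test binary can attach as a lightweight secondary process instead of re-initializing the EAL. The wait announced above amounts to a loop of this shape (a sketch of the harness logic, not its literal code):

    # Poll until the stub signals readiness by creating /var/run/spdk_stub0,
    # giving up if the stub process dies before becoming ready.
    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e "/proc/$stubpid" ] || exit 1
        sleep 1s
    done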
00:09:32.909 20:16:47 -- common/autotest_common.sh@1047 -- # stubpid=63615 00:09:32.909 20:16:47 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:09:32.909 20:16:47 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:32.909 20:16:47 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:32.909 20:16:47 -- common/autotest_common.sh@1051 -- # [[ -e /proc/63615 ]] 00:09:32.909 20:16:47 -- common/autotest_common.sh@1052 -- # sleep 1s 00:09:32.909 [2024-10-16 20:16:47.786910] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:32.909 [2024-10-16 20:16:47.787186] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.852 20:16:48 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:33.852 20:16:48 -- common/autotest_common.sh@1051 -- # [[ -e /proc/63615 ]] 00:09:33.852 20:16:48 -- common/autotest_common.sh@1052 -- # sleep 1s 00:09:34.114 [2024-10-16 20:16:48.829625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.114 [2024-10-16 20:16:49.021496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.114 [2024-10-16 20:16:49.021828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.114 [2024-10-16 20:16:49.022004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.114 [2024-10-16 20:16:49.042480] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:34.375 [2024-10-16 20:16:49.055909] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:34.375 [2024-10-16 20:16:49.056086] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:34.375 [2024-10-16 20:16:49.064717] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:34.375 [2024-10-16 20:16:49.064952] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:34.375 [2024-10-16 20:16:49.065107] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:34.375 [2024-10-16 20:16:49.074973] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:34.375 [2024-10-16 20:16:49.075270] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:34.375 [2024-10-16 20:16:49.075382] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:34.375 [2024-10-16 20:16:49.082643] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:34.375 [2024-10-16 20:16:49.082916] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:34.375 [2024-10-16 20:16:49.083027] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:34.375 [2024-10-16 20:16:49.083124] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:34.375 [2024-10-16 20:16:49.083238] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:34.947 20:16:49 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 
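Note: the nvme_cuse messages above mean the stub has also exported every controller and namespace as a character device through CUSE, so nvme-cli style tooling can reach SPDK-owned devices via standard ioctls. On this run the nodes line up with the four controllers, nvme3 carrying three namespaces:

    # Device nodes registered by the stub (as logged above):
    ls /dev/spdk
    # nvme0  nvme0n1  nvme1  nvme1n1  nvme2  nvme2n1
    # nvme3  nvme3n1  nvme3n2  nvme3n3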
00:09:34.947 done. 00:09:34.947 20:16:49 -- common/autotest_common.sh@1054 -- # echo done. 00:09:34.947 20:16:49 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:34.947 20:16:49 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:34.947 20:16:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:34.947 20:16:49 -- common/autotest_common.sh@10 -- # set +x 00:09:34.947 ************************************ 00:09:34.947 START TEST nvme_reset 00:09:34.947 ************************************ 00:09:34.947 20:16:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:35.208 Initializing NVMe Controllers 00:09:35.208 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:35.208 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:35.208 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:35.208 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:35.208 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:35.208 00:09:35.208 real 0m0.222s 00:09:35.208 user 0m0.060s 00:09:35.208 sys 0m0.113s 00:09:35.208 20:16:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.208 20:16:49 -- common/autotest_common.sh@10 -- # set +x 00:09:35.208 ************************************ 00:09:35.208 END TEST nvme_reset 00:09:35.208 ************************************ 00:09:35.208 20:16:50 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:35.208 20:16:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:35.209 20:16:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.209 20:16:50 -- common/autotest_common.sh@10 -- # set +x 00:09:35.209 ************************************ 00:09:35.209 START TEST nvme_identify 00:09:35.209 ************************************ 00:09:35.209 20:16:50 -- common/autotest_common.sh@1104 -- # nvme_identify 00:09:35.209 20:16:50 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:35.209 20:16:50 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:35.209 20:16:50 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:35.209 20:16:50 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:35.209 20:16:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:35.209 20:16:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:35.209 20:16:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:35.209 20:16:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:35.209 20:16:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:35.209 20:16:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:35.209 20:16:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:35.209 20:16:50 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:35.472 ===================================================== 00:09:35.472 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:35.472 ===================================================== 00:09:35.472 Controller Capabilities/Features 00:09:35.472 ================================ 00:09:35.472 Vendor ID: 1b36 00:09:35.472 Subsystem Vendor ID: 1af4 00:09:35.472 Serial Number: 12341 00:09:35.472 Model Number: QEMU NVMe Ctrl 00:09:35.472 Firmware Version: 8.0.0 00:09:35.472 Recommended Arb Burst: 6 00:09:35.472 IEEE OUI Identifier: 00 54 52 
00:09:35.472 Multi-path I/O 00:09:35.472 May have multiple subsystem ports: No 00:09:35.472 May have multiple controllers: No 00:09:35.472 Associated with SR-IOV VF: No 00:09:35.472 Max Data Transfer Size: 524288 00:09:35.472 Max Number of Namespaces: 256 00:09:35.472 Max Number of I/O Queues: 64 00:09:35.472 NVMe Specification Version (VS): 1.4 00:09:35.472 NVMe Specification Version (Identify): 1.4 00:09:35.472 Maximum Queue Entries: 2048 00:09:35.472 Contiguous Queues Required: Yes 00:09:35.472 Arbitration Mechanisms Supported 00:09:35.472 Weighted Round Robin: Not Supported 00:09:35.472 Vendor Specific: Not Supported 00:09:35.472 Reset Timeout: 7500 ms 00:09:35.472 Doorbell Stride: 4 bytes 00:09:35.472 NVM Subsystem Reset: Not Supported 00:09:35.472 Command Sets Supported 00:09:35.472 NVM Command Set: Supported 00:09:35.472 Boot Partition: Not Supported 00:09:35.472 Memory Page Size Minimum: 4096 bytes 00:09:35.472 Memory Page Size Maximum: 65536 bytes 00:09:35.472 Persistent Memory Region: Not Supported 00:09:35.472 Optional Asynchronous Events Supported 00:09:35.472 Namespace Attribute Notices: Supported 00:09:35.472 Firmware Activation Notices: Not Supported 00:09:35.472 ANA Change Notices: Not Supported 00:09:35.472 PLE Aggregate Log Change Notices: Not Supported 00:09:35.472 LBA Status Info Alert Notices: Not Supported 00:09:35.472 EGE Aggregate Log Change Notices: Not Supported 00:09:35.472 Normal NVM Subsystem Shutdown event: Not Supported 00:09:35.472 Zone Descriptor Change Notices: Not Supported 00:09:35.472 Discovery Log Change Notices: Not Supported 00:09:35.472 Controller Attributes 00:09:35.472 128-bit Host Identifier: Not Supported 00:09:35.472 Non-Operational Permissive Mode: Not Supported 00:09:35.472 NVM Sets: Not Supported 00:09:35.472 Read Recovery Levels: Not Supported 00:09:35.472 Endurance Groups: Not Supported 00:09:35.472 Predictable Latency Mode: Not Supported 00:09:35.472 Traffic Based Keep Alive: Not Supported 00:09:35.472 Namespace Granularity: Not Supported 00:09:35.472 SQ Associations: Not Supported 00:09:35.472 UUID List: Not Supported 00:09:35.472 Multi-Domain Subsystem: Not Supported 00:09:35.472 Fixed Capacity Management: Not Supported 00:09:35.472 Variable Capacity Management: Not Supported 00:09:35.472 Delete Endurance Group: Not Supported 00:09:35.472 Delete NVM Set: Not Supported 00:09:35.472 Extended LBA Formats Supported: Supported 00:09:35.472 Flexible Data Placement Supported: Not Supported 00:09:35.472 00:09:35.472 Controller Memory Buffer Support 00:09:35.472 ================================ 00:09:35.472 Supported: No 00:09:35.472 00:09:35.472 Persistent Memory Region Support 00:09:35.472 ================================ 00:09:35.472 Supported: No 00:09:35.472 00:09:35.472 Admin Command Set Attributes 00:09:35.472 ============================ 00:09:35.472 Security Send/Receive: Not Supported 00:09:35.472 Format NVM: Supported 00:09:35.472 Firmware Activate/Download: Not Supported 00:09:35.472 Namespace Management: Supported 00:09:35.472 Device Self-Test: Not Supported 00:09:35.472 Directives: Supported 00:09:35.472 NVMe-MI: Not Supported 00:09:35.472 Virtualization Management: Not Supported 00:09:35.472 Doorbell Buffer Config: Supported 00:09:35.472 Get LBA Status Capability: Not Supported 00:09:35.472 Command & Feature Lockdown Capability: Not Supported 00:09:35.472 Abort Command Limit: 4 00:09:35.472 Async Event Request Limit: 4 00:09:35.472 Number of Firmware Slots: N/A 00:09:35.472 Firmware Slot 1 Read-Only: N/A 00:09:35.472 Firmware
Activation Without Reset: N/A 00:09:35.473 Multiple Update Detection Support: N/A 00:09:35.473 Firmware Update Granularity: No Information Provided 00:09:35.473 Per-Namespace SMART Log: Yes 00:09:35.473 Asymmetric Namespace Access Log Page: Not Supported 00:09:35.473 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:35.473 Command Effects Log Page: Supported 00:09:35.473 Get Log Page Extended Data: Supported 00:09:35.473 Telemetry Log Pages: Not Supported 00:09:35.473 Persistent Event Log Pages: Not Supported 00:09:35.473 Supported Log Pages Log Page: May Support 00:09:35.473 Commands Supported & Effects Log Page: Not Supported 00:09:35.473 Feature Identifiers & Effects Log Page: May Support 00:09:35.473 NVMe-MI Commands & Effects Log Page: May Support 00:09:35.473 Data Area 4 for Telemetry Log: Not Supported 00:09:35.473 Error Log Page Entries Supported: 1 00:09:35.473 Keep Alive: Not Supported 00:09:35.473 00:09:35.473 NVM Command Set Attributes 00:09:35.473 ========================== 00:09:35.473 Submission Queue Entry Size 00:09:35.473 Max: 64 00:09:35.473 Min: 64 00:09:35.473 Completion Queue Entry Size 00:09:35.473 Max: 16 00:09:35.473 Min: 16 00:09:35.473 Number of Namespaces: 256 00:09:35.473 Compare Command: Supported 00:09:35.473 Write Uncorrectable Command: Not Supported 00:09:35.473 Dataset Management Command: Supported 00:09:35.473 Write Zeroes Command: Supported 00:09:35.473 Set Features Save Field: Supported 00:09:35.473 Reservations: Not Supported 00:09:35.473 Timestamp: Supported 00:09:35.473 Copy: Supported 00:09:35.473 Volatile Write Cache: Present 00:09:35.473 Atomic Write Unit (Normal): 1 00:09:35.473 Atomic Write Unit (PFail): 1 00:09:35.473 Atomic Compare & Write Unit: 1 00:09:35.473 Fused Compare & Write: Not Supported 00:09:35.473 Scatter-Gather List 00:09:35.473 SGL Command Set: Supported 00:09:35.473 SGL Keyed: Not Supported 00:09:35.473 SGL Bit Bucket Descriptor: Not Supported 00:09:35.473 SGL Metadata Pointer: Not Supported 00:09:35.473 Oversized SGL: Not Supported 00:09:35.473 SGL Metadata Address: Not Supported 00:09:35.473 SGL Offset: Not Supported 00:09:35.473 Transport SGL Data Block: Not Supported 00:09:35.473 Replay Protected Memory Block: Not Supported 00:09:35.473 00:09:35.473 Firmware Slot Information 00:09:35.473 ========================= 00:09:35.473 Active slot: 1 00:09:35.473 Slot 1 Firmware Revision: 1.0 00:09:35.473 00:09:35.473 00:09:35.473 Commands Supported and Effects 00:09:35.473 ============================== 00:09:35.473 Admin Commands 00:09:35.473 -------------- 00:09:35.473 Delete I/O Submission Queue (00h): Supported 00:09:35.473 Create I/O Submission Queue (01h): Supported 00:09:35.473 Get Log Page (02h): Supported 00:09:35.473 Delete I/O Completion Queue (04h): Supported 00:09:35.473 Create I/O Completion Queue (05h): Supported 00:09:35.473 Identify (06h): Supported 00:09:35.473 Abort (08h): Supported 00:09:35.473 Set Features (09h): Supported 00:09:35.473 Get Features (0Ah): Supported 00:09:35.473 Asynchronous Event Request (0Ch): Supported 00:09:35.473 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:35.473 Directive Send (19h): Supported 00:09:35.473 Directive Receive (1Ah): Supported 00:09:35.473 Virtualization Management (1Ch): Supported 00:09:35.473 Doorbell Buffer Config (7Ch): Supported 00:09:35.473 Format NVM (80h): Supported LBA-Change 00:09:35.473 I/O Commands 00:09:35.473 ------------ 00:09:35.473 Flush (00h): Supported LBA-Change 00:09:35.473 Write (01h): Supported LBA-Change 00:09:35.473 Read (02h):
Supported 00:09:35.473 Compare (05h): Supported 00:09:35.473 Write Zeroes (08h): Supported LBA-Change 00:09:35.473 Dataset Management (09h): Supported LBA-Change 00:09:35.473 Unknown (0Ch): Supported 00:09:35.473 Unknown (12h): Supported 00:09:35.473 Copy (19h): Supported LBA-Change 00:09:35.473 Unknown (1Dh): Supported LBA-Change 00:09:35.473 00:09:35.473 Error Log 00:09:35.473 ========= 00:09:35.473 00:09:35.473 Arbitration 00:09:35.473 =========== 00:09:35.473 Arbitration Burst: no limit 00:09:35.473 00:09:35.473 Power Management 00:09:35.473 ================ 00:09:35.473 Number of Power States: 1 00:09:35.473 Current Power State: Power State #0 00:09:35.473 Power State #0: 00:09:35.473 Max Power: 25.00 W 00:09:35.473 Non-Operational State: Operational 00:09:35.473 Entry Latency: 16 microseconds 00:09:35.473 Exit Latency: 4 microseconds 00:09:35.473 Relative Read Throughput: 0 00:09:35.473 Relative Read Latency: 0 00:09:35.473 Relative Write Throughput: 0 00:09:35.473 Relative Write Latency: 0 [2024-10-16 20:16:50.303450] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63654 terminated unexpected [2024-10-16 20:16:50.304708] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63654 terminated unexpected 00:09:35.473 Idle Power: Not Reported 00:09:35.473 Active Power: Not Reported 00:09:35.473 Non-Operational Permissive Mode: Not Supported 00:09:35.473 00:09:35.473 Health Information 00:09:35.473 ================== 00:09:35.473 Critical Warnings: 00:09:35.473 Available Spare Space: OK 00:09:35.473 Temperature: OK 00:09:35.473 Device Reliability: OK 00:09:35.473 Read Only: No 00:09:35.473 Volatile Memory Backup: OK 00:09:35.473 Current Temperature: 323 Kelvin (50 Celsius) 00:09:35.473 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:35.473 Available Spare: 0% 00:09:35.473 Available Spare Threshold: 0% 00:09:35.473 Life Percentage Used: 0% 00:09:35.473 Data Units Read: 1213 00:09:35.473 Data Units Written: 565 00:09:35.473 Host Read Commands: 61649 00:09:35.473 Host Write Commands: 30400 00:09:35.473 Controller Busy Time: 0 minutes 00:09:35.473 Power Cycles: 0 00:09:35.473 Power On Hours: 0 hours 00:09:35.473 Unsafe Shutdowns: 0 00:09:35.473 Unrecoverable Media Errors: 0 00:09:35.473 Lifetime Error Log Entries: 0 00:09:35.473 Warning Temperature Time: 0 minutes 00:09:35.473 Critical Temperature Time: 0 minutes 00:09:35.473 00:09:35.473 Number of Queues 00:09:35.473 ================ 00:09:35.473 Number of I/O Submission Queues: 64 00:09:35.473 Number of I/O Completion Queues: 64 00:09:35.473 00:09:35.473 ZNS Specific Controller Data 00:09:35.473 ============================ 00:09:35.473 Zone Append Size Limit: 0 00:09:35.473 00:09:35.473 00:09:35.473 Active Namespaces 00:09:35.473 ================= 00:09:35.473 Namespace ID:1 00:09:35.473 Error Recovery Timeout: Unlimited 00:09:35.473 Command Set Identifier: NVM (00h) 00:09:35.473 Deallocate: Supported 00:09:35.473 Deallocated/Unwritten Error: Supported 00:09:35.473 Deallocated Read Value: All 0x00 00:09:35.473 Deallocate in Write Zeroes: Not Supported 00:09:35.473 Deallocated Guard Field: 0xFFFF 00:09:35.473 Flush: Supported 00:09:35.473 Reservation: Not Supported 00:09:35.473 Namespace Sharing Capabilities: Private 00:09:35.473 Size (in LBAs): 1310720 (5GiB) 00:09:35.473 Capacity (in LBAs): 1310720 (5GiB) 00:09:35.473 Utilization (in LBAs): 1310720 (5GiB) 00:09:35.473 Thin Provisioning: Not Supported 00:09:35.473 Per-NS Atomic Units: No
00:09:35.473 Maximum Single Source Range Length: 128 00:09:35.473 Maximum Copy Length: 128 00:09:35.473 Maximum Source Range Count: 128 00:09:35.473 NGUID/EUI64 Never Reused: No 00:09:35.473 Namespace Write Protected: No 00:09:35.473 Number of LBA Formats: 8 00:09:35.473 Current LBA Format: LBA Format #04 00:09:35.473 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:35.473 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:35.473 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:35.473 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:35.473 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:35.473 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:35.473 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:35.473 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:35.473 00:09:35.473 ===================================================== 00:09:35.473 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:35.473 ===================================================== 00:09:35.473 Controller Capabilities/Features 00:09:35.473 ================================ 00:09:35.473 Vendor ID: 1b36 00:09:35.473 Subsystem Vendor ID: 1af4 00:09:35.473 Serial Number: 12343 00:09:35.473 Model Number: QEMU NVMe Ctrl 00:09:35.473 Firmware Version: 8.0.0 00:09:35.473 Recommended Arb Burst: 6 00:09:35.473 IEEE OUI Identifier: 00 54 52 00:09:35.473 Multi-path I/O 00:09:35.473 May have multiple subsystem ports: No 00:09:35.473 May have multiple controllers: Yes 00:09:35.473 Associated with SR-IOV VF: No 00:09:35.473 Max Data Transfer Size: 524288 00:09:35.473 Max Number of Namespaces: 256 00:09:35.473 Max Number of I/O Queues: 64 00:09:35.473 NVMe Specification Version (VS): 1.4 00:09:35.473 NVMe Specification Version (Identify): 1.4 00:09:35.473 Maximum Queue Entries: 2048 00:09:35.473 Contiguous Queues Required: Yes 00:09:35.473 Arbitration Mechanisms Supported 00:09:35.473 Weighted Round Robin: Not Supported 00:09:35.473 Vendor Specific: Not Supported 00:09:35.473 Reset Timeout: 7500 ms 00:09:35.473 Doorbell Stride: 4 bytes 00:09:35.473 NVM Subsystem Reset: Not Supported 00:09:35.473 Command Sets Supported 00:09:35.473 NVM Command Set: Supported 00:09:35.473 Boot Partition: Not Supported 00:09:35.473 Memory Page Size Minimum: 4096 bytes 00:09:35.474 Memory Page Size Maximum: 65536 bytes 00:09:35.474 Persistent Memory Region: Not Supported 00:09:35.474 Optional Asynchronous Events Supported 00:09:35.474 Namespace Attribute Notices: Supported 00:09:35.474 Firmware Activation Notices: Not Supported 00:09:35.474 ANA Change Notices: Not Supported 00:09:35.474 PLE Aggregate Log Change Notices: Not Supported 00:09:35.474 LBA Status Info Alert Notices: Not Supported 00:09:35.474 EGE Aggregate Log Change Notices: Not Supported 00:09:35.474 Normal NVM Subsystem Shutdown event: Not Supported 00:09:35.474 Zone Descriptor Change Notices: Not Supported 00:09:35.474 Discovery Log Change Notices: Not Supported 00:09:35.474 Controller Attributes 00:09:35.474 128-bit Host Identifier: Not Supported 00:09:35.474 Non-Operational Permissive Mode: Not Supported 00:09:35.474 NVM Sets: Not Supported 00:09:35.474 Read Recovery Levels: Not Supported 00:09:35.474 Endurance Groups: Supported 00:09:35.474 Predictable Latency Mode: Not Supported 00:09:35.474 Traffic Based Keep Alive: Not Supported 00:09:35.474 Namespace Granularity: Not Supported 00:09:35.474 SQ Associations: Not Supported 00:09:35.474 UUID List: Not Supported 00:09:35.474 Multi-Domain Subsystem: Not Supported 00:09:35.474 Fixed Capacity
Management: Not Supported 00:09:35.474 Variable Capacity Management: Not Supported 00:09:35.474 Delete Endurance Group: Not Supported 00:09:35.474 Delete NVM Set: Not Supported 00:09:35.474 Extended LBA Formats Supported: Supported 00:09:35.474 Flexible Data Placement Supported: Supported 00:09:35.474 00:09:35.474 Controller Memory Buffer Support 00:09:35.474 ================================ 00:09:35.474 Supported: No 00:09:35.474 00:09:35.474 Persistent Memory Region Support 00:09:35.474 ================================ 00:09:35.474 Supported: No 00:09:35.474 00:09:35.474 Admin Command Set Attributes 00:09:35.474 ============================ 00:09:35.474 Security Send/Receive: Not Supported 00:09:35.474 Format NVM: Supported 00:09:35.474 Firmware Activate/Download: Not Supported 00:09:35.474 Namespace Management: Supported 00:09:35.474 Device Self-Test: Not Supported 00:09:35.474 Directives: Supported 00:09:35.474 NVMe-MI: Not Supported 00:09:35.474 Virtualization Management: Not Supported 00:09:35.474 Doorbell Buffer Config: Supported 00:09:35.474 Get LBA Status Capability: Not Supported 00:09:35.474 Command & Feature Lockdown Capability: Not Supported 00:09:35.474 Abort Command Limit: 4 00:09:35.474 Async Event Request Limit: 4 00:09:35.474 Number of Firmware Slots: N/A 00:09:35.474 Firmware Slot 1 Read-Only: N/A 00:09:35.474 Firmware Activation Without Reset: N/A 00:09:35.474 Multiple Update Detection Support: N/A 00:09:35.474 Firmware Update Granularity: No Information Provided 00:09:35.474 Per-Namespace SMART Log: Yes 00:09:35.474 Asymmetric Namespace Access Log Page: Not Supported 00:09:35.474 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:35.474 Command Effects Log Page: Supported 00:09:35.474 Get Log Page Extended Data: Supported 00:09:35.474 Telemetry Log Pages: Not Supported 00:09:35.474 Persistent Event Log Pages: Not Supported 00:09:35.474 Supported Log Pages Log Page: May Support 00:09:35.474 Commands Supported & Effects Log Page: Not Supported 00:09:35.474 Feature Identifiers & Effects Log Page: May Support 00:09:35.474 NVMe-MI Commands & Effects Log Page: May Support 00:09:35.474 Data Area 4 for Telemetry Log: Not Supported 00:09:35.474 Error Log Page Entries Supported: 1 00:09:35.474 Keep Alive: Not Supported 00:09:35.474 00:09:35.474 NVM Command Set Attributes 00:09:35.474 ========================== 00:09:35.474 Submission Queue Entry Size 00:09:35.474 Max: 64 00:09:35.474 Min: 64 00:09:35.474 Completion Queue Entry Size 00:09:35.474 Max: 16 00:09:35.474 Min: 16 00:09:35.474 Number of Namespaces: 256 00:09:35.474 Compare Command: Supported 00:09:35.474 Write Uncorrectable Command: Not Supported 00:09:35.474 Dataset Management Command: Supported 00:09:35.474 Write Zeroes Command: Supported 00:09:35.474 Set Features Save Field: Supported 00:09:35.474 Reservations: Not Supported 00:09:35.474 Timestamp: Supported 00:09:35.474 Copy: Supported 00:09:35.474 Volatile Write Cache: Present 00:09:35.474 Atomic Write Unit (Normal): 1 00:09:35.474 Atomic Write Unit (PFail): 1 00:09:35.474 Atomic Compare & Write Unit: 1 00:09:35.474 Fused Compare & Write: Not Supported 00:09:35.474 Scatter-Gather List 00:09:35.474 SGL Command Set: Supported 00:09:35.474 SGL Keyed: Not Supported 00:09:35.474 SGL Bit Bucket Descriptor: Not Supported 00:09:35.474 SGL Metadata Pointer: Not Supported 00:09:35.474 Oversized SGL: Not Supported 00:09:35.474 SGL Metadata Address: Not Supported 00:09:35.474 SGL Offset: Not Supported 00:09:35.474 Transport SGL Data Block: Not Supported 00:09:35.474 Replay
Protected Memory Block: Not Supported 00:09:35.474 00:09:35.474 Firmware Slot Information 00:09:35.474 ========================= 00:09:35.474 Active slot: 1 00:09:35.474 Slot 1 Firmware Revision: 1.0 00:09:35.474 00:09:35.474 00:09:35.474 Commands Supported and Effects 00:09:35.474 ============================== 00:09:35.474 Admin Commands 00:09:35.474 -------------- 00:09:35.474 Delete I/O Submission Queue (00h): Supported 00:09:35.474 Create I/O Submission Queue (01h): Supported 00:09:35.474 Get Log Page (02h): Supported 00:09:35.474 Delete I/O Completion Queue (04h): Supported 00:09:35.474 Create I/O Completion Queue (05h): Supported 00:09:35.474 Identify (06h): Supported 00:09:35.474 Abort (08h): Supported 00:09:35.474 Set Features (09h): Supported 00:09:35.474 Get Features (0Ah): Supported 00:09:35.474 Asynchronous Event Request (0Ch): Supported 00:09:35.474 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:35.474 Directive Send (19h): Supported 00:09:35.474 Directive Receive (1Ah): Supported 00:09:35.474 Virtualization Management (1Ch): Supported 00:09:35.474 Doorbell Buffer Config (7Ch): Supported 00:09:35.474 Format NVM (80h): Supported LBA-Change 00:09:35.474 I/O Commands 00:09:35.474 ------------ 00:09:35.474 Flush (00h): Supported LBA-Change 00:09:35.474 Write (01h): Supported LBA-Change 00:09:35.474 Read (02h): Supported 00:09:35.474 Compare (05h): Supported 00:09:35.474 Write Zeroes (08h): Supported LBA-Change 00:09:35.474 Dataset Management (09h): Supported LBA-Change 00:09:35.474 Unknown (0Ch): Supported 00:09:35.474 Unknown (12h): Supported 00:09:35.474 Copy (19h): Supported LBA-Change 00:09:35.474 Unknown (1Dh): Supported LBA-Change 00:09:35.474 00:09:35.474 Error Log 00:09:35.474 ========= 00:09:35.474 00:09:35.474 Arbitration 00:09:35.474 =========== 00:09:35.474 Arbitration Burst: no limit 00:09:35.474 00:09:35.474 Power Management 00:09:35.474 ================ 00:09:35.474 Number of Power States: 1 00:09:35.474 Current Power State: Power State #0 00:09:35.474 Power State #0: 00:09:35.474 Max Power: 25.00 W 00:09:35.474 Non-Operational State: Operational 00:09:35.474 Entry Latency: 16 microseconds 00:09:35.474 Exit Latency: 4 microseconds 00:09:35.474 Relative Read Throughput: 0 00:09:35.474 Relative Read Latency: 0 00:09:35.474 Relative Write Throughput: 0 00:09:35.474 Relative Write Latency: 0 00:09:35.474 Idle Power: Not Reported 00:09:35.474 Active Power: Not Reported 00:09:35.474 Non-Operational Permissive Mode: Not Supported 00:09:35.474 00:09:35.474 Health Information 00:09:35.474 ================== 00:09:35.474 Critical Warnings: 00:09:35.474 Available Spare Space: OK 00:09:35.474 Temperature: OK 00:09:35.474 Device Reliability: OK 00:09:35.474 Read Only: No 00:09:35.474 Volatile Memory Backup: OK 00:09:35.474 Current Temperature: 323 Kelvin (50 Celsius) 00:09:35.474 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:35.474 Available Spare: 0% 00:09:35.474 Available Spare Threshold: 0% 00:09:35.474 Life Percentage Used: 0% 00:09:35.474 Data Units Read: 1277 00:09:35.474 Data Units Written: 591 00:09:35.474 Host Read Commands: 62307 00:09:35.474 Host Write Commands: 30697 00:09:35.474 Controller Busy Time: 0 minutes 00:09:35.474 Power Cycles: 0 00:09:35.474 Power On Hours: 0 hours 00:09:35.474 Unsafe Shutdowns: 0 00:09:35.474 Unrecoverable Media Errors: 0 00:09:35.474 Lifetime Error Log Entries: 0 00:09:35.474 Warning Temperature Time: 0 minutes 00:09:35.474 Critical Temperature Time: 0 minutes 00:09:35.474 00:09:35.474 Number of Queues 
00:09:35.474 ================ 00:09:35.474 Number of I/O Submission Queues: 64 00:09:35.474 Number of I/O Completion Queues: 64 00:09:35.474 00:09:35.474 ZNS Specific Controller Data 00:09:35.474 ============================ 00:09:35.474 Zone Append Size Limit: 0 00:09:35.474 00:09:35.474 00:09:35.474 Active Namespaces 00:09:35.474 ================= 00:09:35.474 Namespace ID:1 00:09:35.474 Error Recovery Timeout: Unlimited 00:09:35.474 Command Set Identifier: NVM (00h) 00:09:35.474 Deallocate: Supported [2024-10-16 20:16:50.308582] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63654 terminated unexpected 00:09:35.474 Deallocated/Unwritten Error: Supported 00:09:35.474 Deallocated Read Value: All 0x00 00:09:35.474 Deallocate in Write Zeroes: Not Supported 00:09:35.474 Deallocated Guard Field: 0xFFFF 00:09:35.475 Flush: Supported 00:09:35.475 Reservation: Not Supported 00:09:35.475 Namespace Sharing Capabilities: Multiple Controllers 00:09:35.475 Size (in LBAs): 262144 (1GiB) 00:09:35.475 Capacity (in LBAs): 262144 (1GiB) 00:09:35.475 Utilization (in LBAs): 262144 (1GiB) 00:09:35.475 Thin Provisioning: Not Supported 00:09:35.475 Per-NS Atomic Units: No 00:09:35.475 Maximum Single Source Range Length: 128 00:09:35.475 Maximum Copy Length: 128 00:09:35.475 Maximum Source Range Count: 128 00:09:35.475 NGUID/EUI64 Never Reused: No 00:09:35.475 Namespace Write Protected: No 00:09:35.475 Endurance group ID: 1 00:09:35.475 Number of LBA Formats: 8 00:09:35.475 Current LBA Format: LBA Format #04 00:09:35.475 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:35.475 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:35.475 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:35.475 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:35.475 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:35.475 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:35.475 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:35.475 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:35.475 00:09:35.475 Get Feature FDP: 00:09:35.475 ================ 00:09:35.475 Enabled: Yes 00:09:35.475 FDP configuration index: 0 00:09:35.475 00:09:35.475 FDP configurations log page 00:09:35.475 =========================== 00:09:35.475 Number of FDP configurations: 1 00:09:35.475 Version: 0 00:09:35.475 Size: 112 00:09:35.475 FDP Configuration Descriptor: 0 00:09:35.475 Descriptor Size: 96 00:09:35.475 Reclaim Group Identifier format: 2 00:09:35.475 FDP Volatile Write Cache: Not Present 00:09:35.475 FDP Configuration: Valid 00:09:35.475 Vendor Specific Size: 0 00:09:35.475 Number of Reclaim Groups: 2 00:09:35.475 Number of Reclaim Unit Handles: 8 00:09:35.475 Max Placement Identifiers: 128 00:09:35.475 Number of Namespaces Supported: 256 00:09:35.475 Reclaim Unit Nominal Size: 6000000 bytes 00:09:35.475 Estimated Reclaim Unit Time Limit: Not Reported 00:09:35.475 RUH Desc #000: RUH Type: Initially Isolated 00:09:35.475 RUH Desc #001: RUH Type: Initially Isolated 00:09:35.475 RUH Desc #002: RUH Type: Initially Isolated 00:09:35.475 RUH Desc #003: RUH Type: Initially Isolated 00:09:35.475 RUH Desc #004: RUH Type: Initially Isolated 00:09:35.475 RUH Desc #005: RUH Type: Initially Isolated 00:09:35.475 RUH Desc #006: RUH Type: Initially Isolated 00:09:35.475 RUH Desc #007: RUH Type: Initially Isolated 00:09:35.475 00:09:35.475 FDP reclaim unit handle usage log page 00:09:35.475 ====================================== 00:09:35.475 Number of Reclaim Unit Handles: 8
00:09:35.475 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:35.475 RUH Usage Desc #001: RUH Attributes: Unused 00:09:35.475 RUH Usage Desc #002: RUH Attributes: Unused 00:09:35.475 RUH Usage Desc #003: RUH Attributes: Unused 00:09:35.475 RUH Usage Desc #004: RUH Attributes: Unused 00:09:35.475 RUH Usage Desc #005: RUH Attributes: Unused 00:09:35.475 RUH Usage Desc #006: RUH Attributes: Unused 00:09:35.475 RUH Usage Desc #007: RUH Attributes: Unused 00:09:35.475 00:09:35.475 FDP statistics log page 00:09:35.475 ======================= 00:09:35.475 Host bytes with metadata written: 363941888 00:09:35.475 Media bytes with metadata written: 363999232 00:09:35.475 Media bytes erased: 0 00:09:35.475 00:09:35.475 FDP events log page 00:09:35.475 =================== 00:09:35.475 Number of FDP events: 0 00:09:35.475 00:09:35.475 ===================================================== 00:09:35.475 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:35.475 ===================================================== 00:09:35.475 Controller Capabilities/Features 00:09:35.475 ================================ 00:09:35.475 Vendor ID: 1b36 00:09:35.475 Subsystem Vendor ID: 1af4 00:09:35.475 Serial Number: 12340 00:09:35.475 Model Number: QEMU NVMe Ctrl 00:09:35.475 Firmware Version: 8.0.0 00:09:35.475 Recommended Arb Burst: 6 00:09:35.475 IEEE OUI Identifier: 00 54 52 00:09:35.475 Multi-path I/O 00:09:35.475 May have multiple subsystem ports: No 00:09:35.475 May have multiple controllers: No 00:09:35.475 Associated with SR-IOV VF: No 00:09:35.475 Max Data Transfer Size: 524288 00:09:35.475 Max Number of Namespaces: 256 00:09:35.475 Max Number of I/O Queues: 64 00:09:35.475 NVMe Specification Version (VS): 1.4 00:09:35.475 NVMe Specification Version (Identify): 1.4 00:09:35.475 Maximum Queue Entries: 2048 00:09:35.475 Contiguous Queues Required: Yes 00:09:35.475 Arbitration Mechanisms Supported 00:09:35.475 Weighted Round Robin: Not Supported 00:09:35.475 Vendor Specific: Not Supported 00:09:35.475 Reset Timeout: 7500 ms 00:09:35.475 Doorbell Stride: 4 bytes 00:09:35.475 NVM Subsystem Reset: Not Supported 00:09:35.475 Command Sets Supported 00:09:35.475 NVM Command Set: Supported 00:09:35.475 Boot Partition: Not Supported 00:09:35.475 Memory Page Size Minimum: 4096 bytes 00:09:35.475 Memory Page Size Maximum: 65536 bytes 00:09:35.475 Persistent Memory Region: Not Supported 00:09:35.475 Optional Asynchronous Events Supported 00:09:35.475 Namespace Attribute Notices: Supported 00:09:35.475 Firmware Activation Notices: Not Supported 00:09:35.475 ANA Change Notices: Not Supported 00:09:35.475 PLE Aggregate Log Change Notices: Not Supported 00:09:35.475 LBA Status Info Alert Notices: Not Supported 00:09:35.475 EGE Aggregate Log Change Notices: Not Supported 00:09:35.475 Normal NVM Subsystem Shutdown event: Not Supported 00:09:35.475 Zone Descriptor Change Notices: Not Supported 00:09:35.475 Discovery Log Change Notices: Not Supported 00:09:35.475 Controller Attributes 00:09:35.475 128-bit Host Identifier: Not Supported 00:09:35.475 Non-Operational Permissive Mode: Not Supported 00:09:35.475 NVM Sets: Not Supported 00:09:35.475 Read Recovery Levels: Not Supported 00:09:35.475 Endurance Groups: Not Supported 00:09:35.475 Predictable Latency Mode: Not Supported 00:09:35.475 Traffic Based Keep Alive: Not Supported 00:09:35.475 Namespace Granularity: Not Supported 00:09:35.475 SQ Associations: Not Supported 00:09:35.475 UUID List: Not Supported 00:09:35.475 Multi-Domain Subsystem: Not Supported 00:09:35.475
Fixed Capacity Management: Not Supported 00:09:35.475 Variable Capacity Management: Not Supported 00:09:35.475 Delete Endurance Group: Not Supported 00:09:35.475 Delete NVM Set: Not Supported 00:09:35.475 Extended LBA Formats Supported: Supported 00:09:35.475 Flexible Data Placement Supported: Not Supported 00:09:35.475 00:09:35.475 Controller Memory Buffer Support 00:09:35.475 ================================ 00:09:35.475 Supported: No 00:09:35.475 00:09:35.475 Persistent Memory Region Support 00:09:35.475 ================================ 00:09:35.475 Supported: No 00:09:35.475 00:09:35.475 Admin Command Set Attributes 00:09:35.475 ============================ 00:09:35.475 Security Send/Receive: Not Supported 00:09:35.475 Format NVM: Supported 00:09:35.475 Firmware Activate/Download: Not Supported 00:09:35.475 Namespace Management: Supported 00:09:35.475 Device Self-Test: Not Supported 00:09:35.475 Directives: Supported 00:09:35.475 NVMe-MI: Not Supported 00:09:35.475 Virtualization Management: Not Supported 00:09:35.475 Doorbell Buffer Config: Supported 00:09:35.475 Get LBA Status Capability: Not Supported 00:09:35.475 Command & Feature Lockdown Capability: Not Supported 00:09:35.475 Abort Command Limit: 4 00:09:35.475 Async Event Request Limit: 4 00:09:35.475 Number of Firmware Slots: N/A 00:09:35.475 Firmware Slot 1 Read-Only: N/A 00:09:35.475 Firmware Activation Without Reset: N/A 00:09:35.475 Multiple Update Detection Support: N/A 00:09:35.475 Firmware Update Granularity: No Information Provided 00:09:35.475 Per-Namespace SMART Log: Yes 00:09:35.475 Asymmetric Namespace Access Log Page: Not Supported 00:09:35.475 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:35.475 Command Effects Log Page: Supported 00:09:35.475 Get Log Page Extended Data: Supported 00:09:35.475 Telemetry Log Pages: Not Supported 00:09:35.475 Persistent Event Log Pages: Not Supported 00:09:35.475 Supported Log Pages Log Page: May Support 00:09:35.475 Commands Supported & Effects Log Page: Not Supported 00:09:35.475 Feature Identifiers & Effects Log Page: May Support 00:09:35.475 NVMe-MI Commands & Effects Log Page: May Support 00:09:35.475 Data Area 4 for Telemetry Log: Not Supported 00:09:35.475 Error Log Page Entries Supported: 1 00:09:35.475 Keep Alive: Not Supported 00:09:35.475 00:09:35.475 NVM Command Set Attributes 00:09:35.475 ========================== 00:09:35.475 Submission Queue Entry Size 00:09:35.475 Max: 64 00:09:35.475 Min: 64 00:09:35.475 Completion Queue Entry Size 00:09:35.475 Max: 16 00:09:35.475 Min: 16 00:09:35.475 Number of Namespaces: 256 00:09:35.475 Compare Command: Supported 00:09:35.475 Write Uncorrectable Command: Not Supported 00:09:35.475 Dataset Management Command: Supported 00:09:35.475 Write Zeroes Command: Supported 00:09:35.475 Set Features Save Field: Supported 00:09:35.475 Reservations: Not Supported 00:09:35.475 Timestamp: Supported 00:09:35.475 Copy: Supported 00:09:35.475 Volatile Write Cache: Present 00:09:35.475 Atomic Write Unit (Normal): 1 00:09:35.475 Atomic Write Unit (PFail): 1 00:09:35.475 Atomic Compare & Write Unit: 1 00:09:35.475 Fused Compare & Write: Not Supported 00:09:35.475 Scatter-Gather List 00:09:35.475 SGL Command Set: Supported 00:09:35.475 SGL Keyed: Not Supported 00:09:35.475 SGL Bit Bucket Descriptor: Not Supported 00:09:35.476 SGL Metadata Pointer: Not Supported 00:09:35.476 Oversized SGL: Not Supported 00:09:35.476 SGL Metadata Address: Not Supported 00:09:35.476 SGL Offset: Not Supported 00:09:35.476 Transport SGL Data Block: Not Supported
00:09:35.476 Replay Protected Memory Block: Not Supported 00:09:35.476 00:09:35.476 Firmware Slot Information 00:09:35.476 ========================= 00:09:35.476 Active slot: 1 00:09:35.476 Slot 1 Firmware Revision: 1.0 00:09:35.476 00:09:35.476 00:09:35.476 Commands Supported and Effects 00:09:35.476 ============================== 00:09:35.476 Admin Commands 00:09:35.476 -------------- 00:09:35.476 Delete I/O Submission Queue (00h): Supported 00:09:35.476 Create I/O Submission Queue (01h): Supported 00:09:35.476 Get Log Page (02h): Supported 00:09:35.476 Delete I/O Completion Queue (04h): Supported 00:09:35.476 Create I/O Completion Queue (05h): Supported 00:09:35.476 Identify (06h): Supported 00:09:35.476 Abort (08h): Supported 00:09:35.476 Set Features (09h): Supported 00:09:35.476 Get Features (0Ah): Supported 00:09:35.476 Asynchronous Event Request (0Ch): Supported 00:09:35.476 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:35.476 Directive Send (19h): Supported 00:09:35.476 Directive Receive (1Ah): Supported 00:09:35.476 Virtualization Management (1Ch): Supported 00:09:35.476 Doorbell Buffer Config (7Ch): Supported 00:09:35.476 Format NVM (80h): Supported LBA-Change 00:09:35.476 I/O Commands 00:09:35.476 ------------ 00:09:35.476 Flush (00h): Supported LBA-Change 00:09:35.476 Write (01h): Supported LBA-Change 00:09:35.476 Read (02h): Supported 00:09:35.476 Compare (05h): Supported 00:09:35.476 Write Zeroes (08h): Supported LBA-Change 00:09:35.476 Dataset Management (09h): Supported LBA-Change 00:09:35.476 Unknown (0Ch): Supported 00:09:35.476 Unknown (12h): Supported 00:09:35.476 Copy (19h): Supported LBA-Change 00:09:35.476 Unknown (1Dh): Supported LBA-Change 00:09:35.476 00:09:35.476 Error Log 00:09:35.476 ========= 00:09:35.476 00:09:35.476 Arbitration 00:09:35.476 =========== 00:09:35.476 Arbitration Burst: no limit 00:09:35.476 00:09:35.476 Power Management 00:09:35.476 ================ 00:09:35.476 Number of Power States: 1 00:09:35.476 Current Power State: Power State #0 00:09:35.476 Power State #0: 00:09:35.476 Max Power: 25.00 W 00:09:35.476 Non-Operational State: Operational 00:09:35.476 Entry Latency: 16 microseconds 00:09:35.476 Exit Latency: 4 microseconds 00:09:35.476 Relative Read Throughput: 0 00:09:35.476 Relative Read Latency: 0 00:09:35.476 Relative Write Throughput: 0 00:09:35.476 Relative Write Latency: 0 00:09:35.476 Idle Power: Not Reported 00:09:35.476 Active Power: Not Reported 00:09:35.476 Non-Operational Permissive Mode: Not Supported 00:09:35.476 00:09:35.476 Health Information 00:09:35.476 ================== 00:09:35.476 Critical Warnings: 00:09:35.476 Available Spare Space: OK 00:09:35.476 Temperature: OK 00:09:35.476 Device Reliability: OK 00:09:35.476 Read Only: No 00:09:35.476 Volatile Memory Backup: OK 00:09:35.476 Current Temperature: 323 Kelvin (50 Celsius) 00:09:35.476 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:35.476 Available Spare: 0% 00:09:35.476 Available Spare Threshold: 0% 00:09:35.476 Life Percentage Used: 0% 00:09:35.476 Data Units Read: 1804 00:09:35.476 Data Units Written: 833 00:09:35.476 Host Read Commands: 90311 [2024-10-16 20:16:50.310683] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63654 terminated unexpected 00:09:35.476 Host Write Commands: 44908 00:09:35.476 Controller Busy Time: 0 minutes 00:09:35.476 Power Cycles: 0 00:09:35.476 Power On Hours: 0 hours 00:09:35.476 Unsafe Shutdowns: 0 00:09:35.476 Unrecoverable Media Errors: 0 00:09:35.476 Lifetime
Error Log Entries: 0 00:09:35.476 Warning Temperature Time: 0 minutes 00:09:35.476 Critical Temperature Time: 0 minutes 00:09:35.476 00:09:35.476 Number of Queues 00:09:35.476 ================ 00:09:35.476 Number of I/O Submission Queues: 64 00:09:35.476 Number of I/O Completion Queues: 64 00:09:35.476 00:09:35.476 ZNS Specific Controller Data 00:09:35.476 ============================ 00:09:35.476 Zone Append Size Limit: 0 00:09:35.476 00:09:35.476 00:09:35.476 Active Namespaces 00:09:35.476 ================= 00:09:35.476 Namespace ID:1 00:09:35.476 Error Recovery Timeout: Unlimited 00:09:35.476 Command Set Identifier: NVM (00h) 00:09:35.476 Deallocate: Supported 00:09:35.476 Deallocated/Unwritten Error: Supported 00:09:35.476 Deallocated Read Value: All 0x00 00:09:35.476 Deallocate in Write Zeroes: Not Supported 00:09:35.476 Deallocated Guard Field: 0xFFFF 00:09:35.476 Flush: Supported 00:09:35.476 Reservation: Not Supported 00:09:35.476 Metadata Transferred as: Separate Metadata Buffer 00:09:35.476 Namespace Sharing Capabilities: Private 00:09:35.476 Size (in LBAs): 1548666 (5GiB) 00:09:35.476 Capacity (in LBAs): 1548666 (5GiB) 00:09:35.476 Utilization (in LBAs): 1548666 (5GiB) 00:09:35.476 Thin Provisioning: Not Supported 00:09:35.476 Per-NS Atomic Units: No 00:09:35.476 Maximum Single Source Range Length: 128 00:09:35.476 Maximum Copy Length: 128 00:09:35.476 Maximum Source Range Count: 128 00:09:35.476 NGUID/EUI64 Never Reused: No 00:09:35.476 Namespace Write Protected: No 00:09:35.476 Number of LBA Formats: 8 00:09:35.476 Current LBA Format: LBA Format #07 00:09:35.476 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:35.476 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:35.476 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:35.476 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:35.476 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:35.476 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:35.476 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:35.476 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:35.476 00:09:35.476 ===================================================== 00:09:35.476 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:35.476 ===================================================== 00:09:35.476 Controller Capabilities/Features 00:09:35.476 ================================ 00:09:35.476 Vendor ID: 1b36 00:09:35.476 Subsystem Vendor ID: 1af4 00:09:35.476 Serial Number: 12342 00:09:35.476 Model Number: QEMU NVMe Ctrl 00:09:35.476 Firmware Version: 8.0.0 00:09:35.476 Recommended Arb Burst: 6 00:09:35.476 IEEE OUI Identifier: 00 54 52 00:09:35.476 Multi-path I/O 00:09:35.476 May have multiple subsystem ports: No 00:09:35.476 May have multiple controllers: No 00:09:35.476 Associated with SR-IOV VF: No 00:09:35.476 Max Data Transfer Size: 524288 00:09:35.476 Max Number of Namespaces: 256 00:09:35.476 Max Number of I/O Queues: 64 00:09:35.476 NVMe Specification Version (VS): 1.4 00:09:35.476 NVMe Specification Version (Identify): 1.4 00:09:35.476 Maximum Queue Entries: 2048 00:09:35.476 Contiguous Queues Required: Yes 00:09:35.476 Arbitration Mechanisms Supported 00:09:35.476 Weighted Round Robin: Not Supported 00:09:35.476 Vendor Specific: Not Supported 00:09:35.476 Reset Timeout: 7500 ms 00:09:35.476 Doorbell Stride: 4 bytes 00:09:35.476 NVM Subsystem Reset: Not Supported 00:09:35.476 Command Sets Supported 00:09:35.476 NVM Command Set: Supported 00:09:35.476 Boot Partition: Not Supported 00:09:35.476 Memory Page Size Minimum: 
4096 bytes 00:09:35.476 Memory Page Size Maximum: 65536 bytes 00:09:35.476 Persistent Memory Region: Not Supported 00:09:35.476 Optional Asynchronous Events Supported 00:09:35.476 Namespace Attribute Notices: Supported 00:09:35.476 Firmware Activation Notices: Not Supported 00:09:35.476 ANA Change Notices: Not Supported 00:09:35.476 PLE Aggregate Log Change Notices: Not Supported 00:09:35.476 LBA Status Info Alert Notices: Not Supported 00:09:35.476 EGE Aggregate Log Change Notices: Not Supported 00:09:35.476 Normal NVM Subsystem Shutdown event: Not Supported 00:09:35.476 Zone Descriptor Change Notices: Not Supported 00:09:35.476 Discovery Log Change Notices: Not Supported 00:09:35.476 Controller Attributes 00:09:35.476 128-bit Host Identifier: Not Supported 00:09:35.476 Non-Operational Permissive Mode: Not Supported 00:09:35.476 NVM Sets: Not Supported 00:09:35.476 Read Recovery Levels: Not Supported 00:09:35.476 Endurance Groups: Not Supported 00:09:35.476 Predictable Latency Mode: Not Supported 00:09:35.476 Traffic Based Keep Alive: Not Supported 00:09:35.476 Namespace Granularity: Not Supported 00:09:35.476 SQ Associations: Not Supported 00:09:35.476 UUID List: Not Supported 00:09:35.476 Multi-Domain Subsystem: Not Supported 00:09:35.476 Fixed Capacity Management: Not Supported 00:09:35.476 Variable Capacity Management: Not Supported 00:09:35.476 Delete Endurance Group: Not Supported 00:09:35.476 Delete NVM Set: Not Supported 00:09:35.476 Extended LBA Formats Supported: Supported 00:09:35.476 Flexible Data Placement Supported: Not Supported 00:09:35.476 00:09:35.476 Controller Memory Buffer Support 00:09:35.476 ================================ 00:09:35.476 Supported: No 00:09:35.476 00:09:35.476 Persistent Memory Region Support 00:09:35.476 ================================ 00:09:35.476 Supported: No 00:09:35.476 00:09:35.476 Admin Command Set Attributes 00:09:35.476 ============================ 00:09:35.476 Security Send/Receive: Not Supported 00:09:35.477 Format NVM: Supported 00:09:35.477 Firmware Activate/Download: Not Supported 00:09:35.477 Namespace Management: Supported 00:09:35.477 Device Self-Test: Not Supported 00:09:35.477 Directives: Supported 00:09:35.477 NVMe-MI: Not Supported 00:09:35.477 Virtualization Management: Not Supported 00:09:35.477 Doorbell Buffer Config: Supported 00:09:35.477 Get LBA Status Capability: Not Supported 00:09:35.477 Command & Feature Lockdown Capability: Not Supported 00:09:35.477 Abort Command Limit: 4 00:09:35.477 Async Event Request Limit: 4 00:09:35.477 Number of Firmware Slots: N/A 00:09:35.477 Firmware Slot 1 Read-Only: N/A 00:09:35.477 Firmware Activation Without Reset: N/A 00:09:35.477 Multiple Update Detection Support: N/A 00:09:35.477 Firmware Update Granularity: No Information Provided 00:09:35.477 Per-Namespace SMART Log: Yes 00:09:35.477 Asymmetric Namespace Access Log Page: Not Supported 00:09:35.477 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:35.477 Command Effects Log Page: Supported 00:09:35.477 Get Log Page Extended Data: Supported 00:09:35.477 Telemetry Log Pages: Not Supported 00:09:35.477 Persistent Event Log Pages: Not Supported 00:09:35.477 Supported Log Pages Log Page: May Support 00:09:35.477 Commands Supported & Effects Log Page: Not Supported 00:09:35.477 Feature Identifiers & Effects Log Page: May Support 00:09:35.477 NVMe-MI Commands & Effects Log Page: May Support 00:09:35.477 Data Area 4 for Telemetry Log: Not Supported 00:09:35.477 Error Log Page Entries Supported: 1 00:09:35.477 Keep Alive: Not Supported
00:09:35.477 00:09:35.477 NVM Command Set Attributes 00:09:35.477 ========================== 00:09:35.477 Submission Queue Entry Size 00:09:35.477 Max: 64 00:09:35.477 Min: 64 00:09:35.477 Completion Queue Entry Size 00:09:35.477 Max: 16 00:09:35.477 Min: 16 00:09:35.477 Number of Namespaces: 256 00:09:35.477 Compare Command: Supported 00:09:35.477 Write Uncorrectable Command: Not Supported 00:09:35.477 Dataset Management Command: Supported 00:09:35.477 Write Zeroes Command: Supported 00:09:35.477 Set Features Save Field: Supported 00:09:35.477 Reservations: Not Supported 00:09:35.477 Timestamp: Supported 00:09:35.477 Copy: Supported 00:09:35.477 Volatile Write Cache: Present 00:09:35.477 Atomic Write Unit (Normal): 1 00:09:35.477 Atomic Write Unit (PFail): 1 00:09:35.477 Atomic Compare & Write Unit: 1 00:09:35.477 Fused Compare & Write: Not Supported 00:09:35.477 Scatter-Gather List 00:09:35.477 SGL Command Set: Supported 00:09:35.477 SGL Keyed: Not Supported 00:09:35.477 SGL Bit Bucket Descriptor: Not Supported 00:09:35.477 SGL Metadata Pointer: Not Supported 00:09:35.477 Oversized SGL: Not Supported 00:09:35.477 SGL Metadata Address: Not Supported 00:09:35.477 SGL Offset: Not Supported 00:09:35.477 Transport SGL Data Block: Not Supported 00:09:35.477 Replay Protected Memory Block: Not Supported 00:09:35.477 00:09:35.477 Firmware Slot Information 00:09:35.477 ========================= 00:09:35.477 Active slot: 1 00:09:35.477 Slot 1 Firmware Revision: 1.0 00:09:35.477 00:09:35.477 00:09:35.477 Commands Supported and Effects 00:09:35.477 ============================== 00:09:35.477 Admin Commands 00:09:35.477 -------------- 00:09:35.477 Delete I/O Submission Queue (00h): Supported 00:09:35.477 Create I/O Submission Queue (01h): Supported 00:09:35.477 Get Log Page (02h): Supported 00:09:35.477 Delete I/O Completion Queue (04h): Supported 00:09:35.477 Create I/O Completion Queue (05h): Supported 00:09:35.477 Identify (06h): Supported 00:09:35.477 Abort (08h): Supported 00:09:35.477 Set Features (09h): Supported 00:09:35.477 Get Features (0Ah): Supported 00:09:35.477 Asynchronous Event Request (0Ch): Supported 00:09:35.477 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:35.477 Directive Send (19h): Supported 00:09:35.477 Directive Receive (1Ah): Supported 00:09:35.477 Virtualization Management (1Ch): Supported 00:09:35.477 Doorbell Buffer Config (7Ch): Supported 00:09:35.477 Format NVM (80h): Supported LBA-Change 00:09:35.477 I/O Commands 00:09:35.477 ------------ 00:09:35.477 Flush (00h): Supported LBA-Change 00:09:35.477 Write (01h): Supported LBA-Change 00:09:35.477 Read (02h): Supported 00:09:35.477 Compare (05h): Supported 00:09:35.477 Write Zeroes (08h): Supported LBA-Change 00:09:35.477 Dataset Management (09h): Supported LBA-Change 00:09:35.477 Unknown (0Ch): Supported 00:09:35.477 Unknown (12h): Supported 00:09:35.477 Copy (19h): Supported LBA-Change 00:09:35.477 Unknown (1Dh): Supported LBA-Change 00:09:35.477 00:09:35.477 Error Log 00:09:35.477 ========= 00:09:35.477 00:09:35.477 Arbitration 00:09:35.477 =========== 00:09:35.477 Arbitration Burst: no limit 00:09:35.477 00:09:35.477 Power Management 00:09:35.477 ================ 00:09:35.477 Number of Power States: 1 00:09:35.477 Current Power State: Power State #0 00:09:35.477 Power State #0: 00:09:35.477 Max Power: 25.00 W 00:09:35.477 Non-Operational State: Operational 00:09:35.477 Entry Latency: 16 microseconds 00:09:35.477 Exit Latency: 4 microseconds 00:09:35.477 Relative Read Throughput: 0 00:09:35.477 Relative 
Read Latency: 0 00:09:35.477 Relative Write Throughput: 0 00:09:35.477 Relative Write Latency: 0 00:09:35.477 Idle Power: Not Reported 00:09:35.477 Active Power: Not Reported 00:09:35.477 Non-Operational Permissive Mode: Not Supported 00:09:35.477 00:09:35.477 Health Information 00:09:35.477 ================== 00:09:35.477 Critical Warnings: 00:09:35.477 Available Spare Space: OK 00:09:35.477 Temperature: OK 00:09:35.477 Device Reliability: OK 00:09:35.477 Read Only: No 00:09:35.477 Volatile Memory Backup: OK 00:09:35.477 Current Temperature: 323 Kelvin (50 Celsius) 00:09:35.477 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:35.477 Available Spare: 0% 00:09:35.477 Available Spare Threshold: 0% 00:09:35.477 Life Percentage Used: 0% 00:09:35.477 Data Units Read: 3636 00:09:35.477 Data Units Written: 1688 00:09:35.477 Host Read Commands: 185552 00:09:35.477 Host Write Commands: 91357 00:09:35.477 Controller Busy Time: 0 minutes 00:09:35.477 Power Cycles: 0 00:09:35.477 Power On Hours: 0 hours 00:09:35.477 Unsafe Shutdowns: 0 00:09:35.477 Unrecoverable Media Errors: 0 00:09:35.477 Lifetime Error Log Entries: 0 00:09:35.477 Warning Temperature Time: 0 minutes 00:09:35.477 Critical Temperature Time: 0 minutes 00:09:35.477 00:09:35.477 Number of Queues 00:09:35.477 ================ 00:09:35.477 Number of I/O Submission Queues: 64 00:09:35.477 Number of I/O Completion Queues: 64 00:09:35.477 00:09:35.477 ZNS Specific Controller Data 00:09:35.477 ============================ 00:09:35.477 Zone Append Size Limit: 0 00:09:35.477 00:09:35.477 00:09:35.477 Active Namespaces 00:09:35.477 ================= 00:09:35.477 Namespace ID:1 00:09:35.477 Error Recovery Timeout: Unlimited 00:09:35.477 Command Set Identifier: NVM (00h) 00:09:35.477 Deallocate: Supported 00:09:35.477 Deallocated/Unwritten Error: Supported 00:09:35.477 Deallocated Read Value: All 0x00 00:09:35.477 Deallocate in Write Zeroes: Not Supported 00:09:35.477 Deallocated Guard Field: 0xFFFF 00:09:35.477 Flush: Supported 00:09:35.477 Reservation: Not Supported 00:09:35.477 Namespace Sharing Capabilities: Private 00:09:35.477 Size (in LBAs): 1048576 (4GiB) 00:09:35.477 Capacity (in LBAs): 1048576 (4GiB) 00:09:35.477 Utilization (in LBAs): 1048576 (4GiB) 00:09:35.477 Thin Provisioning: Not Supported 00:09:35.477 Per-NS Atomic Units: No 00:09:35.477 Maximum Single Source Range Length: 128 00:09:35.477 Maximum Copy Length: 128 00:09:35.477 Maximum Source Range Count: 128 00:09:35.477 NGUID/EUI64 Never Reused: No 00:09:35.477 Namespace Write Protected: No 00:09:35.477 Number of LBA Formats: 8 00:09:35.478 Current LBA Format: LBA Format #04 00:09:35.478 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:35.478 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:35.478 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:35.478 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:35.478 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:35.478 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:35.478 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:35.478 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:35.478 00:09:35.478 Namespace ID:2 00:09:35.478 Error Recovery Timeout: Unlimited 00:09:35.478 Command Set Identifier: NVM (00h) 00:09:35.478 Deallocate: Supported 00:09:35.478 Deallocated/Unwritten Error: Supported 00:09:35.478 Deallocated Read Value: All 0x00 00:09:35.478 Deallocate in Write Zeroes: Not Supported 00:09:35.478 Deallocated Guard Field: 0xFFFF 00:09:35.478 Flush: Supported 00:09:35.478 Reservation: 
Not Supported 00:09:35.478 Namespace Sharing Capabilities: Private 00:09:35.478 Size (in LBAs): 1048576 (4GiB) 00:09:35.478 Capacity (in LBAs): 1048576 (4GiB) 00:09:35.478 Utilization (in LBAs): 1048576 (4GiB) 00:09:35.478 Thin Provisioning: Not Supported 00:09:35.478 Per-NS Atomic Units: No 00:09:35.478 Maximum Single Source Range Length: 128 00:09:35.478 Maximum Copy Length: 128 00:09:35.478 Maximum Source Range Count: 128 00:09:35.478 NGUID/EUI64 Never Reused: No 00:09:35.478 Namespace Write Protected: No 00:09:35.478 Number of LBA Formats: 8 00:09:35.478 Current LBA Format: LBA Format #04 00:09:35.478 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:35.478 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:35.478 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:35.478 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:35.478 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:35.478 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:35.478 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:35.478 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:35.478 00:09:35.478 Namespace ID:3 00:09:35.478 Error Recovery Timeout: Unlimited 00:09:35.478 Command Set Identifier: NVM (00h) 00:09:35.478 Deallocate: Supported 00:09:35.478 Deallocated/Unwritten Error: Supported 00:09:35.478 Deallocated Read Value: All 0x00 00:09:35.478 Deallocate in Write Zeroes: Not Supported 00:09:35.478 Deallocated Guard Field: 0xFFFF 00:09:35.478 Flush: Supported 00:09:35.478 Reservation: Not Supported 00:09:35.478 Namespace Sharing Capabilities: Private 00:09:35.478 Size (in LBAs): 1048576 (4GiB) 00:09:35.478 Capacity (in LBAs): 1048576 (4GiB) 00:09:35.478 Utilization (in LBAs): 1048576 (4GiB) 00:09:35.478 Thin Provisioning: Not Supported 00:09:35.478 Per-NS Atomic Units: No 00:09:35.478 Maximum Single Source Range Length: 128 00:09:35.478 Maximum Copy Length: 128 00:09:35.478 Maximum Source Range Count: 128 00:09:35.478 NGUID/EUI64 Never Reused: No 00:09:35.478 Namespace Write Protected: No 00:09:35.478 Number of LBA Formats: 8 00:09:35.478 Current LBA Format: LBA Format #04 00:09:35.478 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:35.478 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:35.478 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:35.478 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:35.478 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:35.478 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:35.478 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:35.478 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:35.478 00:09:35.478 20:16:50 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:35.478 20:16:50 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:35.739 ===================================================== 00:09:35.739 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:35.739 ===================================================== 00:09:35.739 Controller Capabilities/Features 00:09:35.739 ================================ 00:09:35.739 Vendor ID: 1b36 00:09:35.739 Subsystem Vendor ID: 1af4 00:09:35.739 Serial Number: 12340 00:09:35.739 Model Number: QEMU NVMe Ctrl 00:09:35.739 Firmware Version: 8.0.0 00:09:35.739 Recommended Arb Burst: 6 00:09:35.739 IEEE OUI Identifier: 00 54 52 00:09:35.739 Multi-path I/O 00:09:35.739 May have multiple subsystem ports: No 00:09:35.739 May have multiple controllers: No 00:09:35.739 Associated with SR-IOV 
VF: No 00:09:35.739 Max Data Transfer Size: 524288 00:09:35.739 Max Number of Namespaces: 256 00:09:35.739 Max Number of I/O Queues: 64 00:09:35.739 NVMe Specification Version (VS): 1.4 00:09:35.739 NVMe Specification Version (Identify): 1.4 00:09:35.739 Maximum Queue Entries: 2048 00:09:35.739 Contiguous Queues Required: Yes 00:09:35.739 Arbitration Mechanisms Supported 00:09:35.739 Weighted Round Robin: Not Supported 00:09:35.739 Vendor Specific: Not Supported 00:09:35.739 Reset Timeout: 7500 ms 00:09:35.739 Doorbell Stride: 4 bytes 00:09:35.740 NVM Subsystem Reset: Not Supported 00:09:35.740 Command Sets Supported 00:09:35.740 NVM Command Set: Supported 00:09:35.740 Boot Partition: Not Supported 00:09:35.740 Memory Page Size Minimum: 4096 bytes 00:09:35.740 Memory Page Size Maximum: 65536 bytes 00:09:35.740 Persistent Memory Region: Not Supported 00:09:35.740 Optional Asynchronous Events Supported 00:09:35.740 Namespace Attribute Notices: Supported 00:09:35.740 Firmware Activation Notices: Not Supported 00:09:35.740 ANA Change Notices: Not Supported 00:09:35.740 PLE Aggregate Log Change Notices: Not Supported 00:09:35.740 LBA Status Info Alert Notices: Not Supported 00:09:35.740 EGE Aggregate Log Change Notices: Not Supported 00:09:35.740 Normal NVM Subsystem Shutdown event: Not Supported 00:09:35.740 Zone Descriptor Change Notices: Not Supported 00:09:35.740 Discovery Log Change Notices: Not Supported 00:09:35.740 Controller Attributes 00:09:35.740 128-bit Host Identifier: Not Supported 00:09:35.740 Non-Operational Permissive Mode: Not Supported 00:09:35.740 NVM Sets: Not Supported 00:09:35.740 Read Recovery Levels: Not Supported 00:09:35.740 Endurance Groups: Not Supported 00:09:35.740 Predictable Latency Mode: Not Supported 00:09:35.740 Traffic Based Keep Alive: Not Supported 00:09:35.740 Namespace Granularity: Not Supported 00:09:35.740 SQ Associations: Not Supported 00:09:35.740 UUID List: Not Supported 00:09:35.740 Multi-Domain Subsystem: Not Supported 00:09:35.740 Fixed Capacity Management: Not Supported 00:09:35.740 Variable Capacity Management: Not Supported 00:09:35.740 Delete Endurance Group: Not Supported 00:09:35.740 Delete NVM Set: Not Supported 00:09:35.740 Extended LBA Formats Supported: Supported 00:09:35.740 Flexible Data Placement Supported: Not Supported 00:09:35.740 00:09:35.740 Controller Memory Buffer Support 00:09:35.740 ================================ 00:09:35.740 Supported: No 00:09:35.740 00:09:35.740 Persistent Memory Region Support 00:09:35.740 ================================ 00:09:35.740 Supported: No 00:09:35.740 00:09:35.740 Admin Command Set Attributes 00:09:35.740 ============================ 00:09:35.740 Security Send/Receive: Not Supported 00:09:35.740 Format NVM: Supported 00:09:35.740 Firmware Activate/Download: Not Supported 00:09:35.740 Namespace Management: Supported 00:09:35.740 Device Self-Test: Not Supported 00:09:35.740 Directives: Supported 00:09:35.740 NVMe-MI: Not Supported 00:09:35.740 Virtualization Management: Not Supported 00:09:35.740 Doorbell Buffer Config: Supported 00:09:35.740 Get LBA Status Capability: Not Supported 00:09:35.740 Command & Feature Lockdown Capability: Not Supported 00:09:35.740 Abort Command Limit: 4 00:09:35.740 Async Event Request Limit: 4 00:09:35.740 Number of Firmware Slots: N/A 00:09:35.740 Firmware Slot 1 Read-Only: N/A 00:09:35.740 Firmware Activation Without Reset: N/A 00:09:35.740 Multiple Update Detection Support: N/A 00:09:35.740 Firmware Update Granularity: No Information Provided 00:09:35.740 
Per-Namespace SMART Log: Yes 00:09:35.740 Asymmetric Namespace Access Log Page: Not Supported 00:09:35.740 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:35.740 Command Effects Log Page: Supported 00:09:35.740 Get Log Page Extended Data: Supported 00:09:35.740 Telemetry Log Pages: Not Supported 00:09:35.740 Persistent Event Log Pages: Not Supported 00:09:35.740 Supported Log Pages Log Page: May Support 00:09:35.740 Commands Supported & Effects Log Page: Not Supported 00:09:35.740 Feature Identifiers & Effects Log Page: May Support 00:09:35.740 NVMe-MI Commands & Effects Log Page: May Support 00:09:35.740 Data Area 4 for Telemetry Log: Not Supported 00:09:35.740 Error Log Page Entries Supported: 1 00:09:35.740 Keep Alive: Not Supported 00:09:35.740 00:09:35.740 NVM Command Set Attributes 00:09:35.740 ========================== 00:09:35.740 Submission Queue Entry Size 00:09:35.740 Max: 64 00:09:35.740 Min: 64 00:09:35.740 Completion Queue Entry Size 00:09:35.740 Max: 16 00:09:35.740 Min: 16 00:09:35.740 Number of Namespaces: 256 00:09:35.740 Compare Command: Supported 00:09:35.740 Write Uncorrectable Command: Not Supported 00:09:35.740 Dataset Management Command: Supported 00:09:35.740 Write Zeroes Command: Supported 00:09:35.740 Set Features Save Field: Supported 00:09:35.740 Reservations: Not Supported 00:09:35.740 Timestamp: Supported 00:09:35.740 Copy: Supported 00:09:35.740 Volatile Write Cache: Present 00:09:35.740 Atomic Write Unit (Normal): 1 00:09:35.740 Atomic Write Unit (PFail): 1 00:09:35.740 Atomic Compare & Write Unit: 1 00:09:35.740 Fused Compare & Write: Not Supported 00:09:35.740 Scatter-Gather List 00:09:35.740 SGL Command Set: Supported 00:09:35.740 SGL Keyed: Not Supported 00:09:35.740 SGL Bit Bucket Descriptor: Not Supported 00:09:35.740 SGL Metadata Pointer: Not Supported 00:09:35.740 Oversized SGL: Not Supported 00:09:35.740 SGL Metadata Address: Not Supported 00:09:35.740 SGL Offset: Not Supported 00:09:35.740 Transport SGL Data Block: Not Supported 00:09:35.740 Replay Protected Memory Block: Not Supported 00:09:35.740 00:09:35.740 Firmware Slot Information 00:09:35.740 ========================= 00:09:35.740 Active slot: 1 00:09:35.740 Slot 1 Firmware Revision: 1.0 00:09:35.740 00:09:35.740 00:09:35.740 Commands Supported and Effects 00:09:35.740 ============================== 00:09:35.740 Admin Commands 00:09:35.740 -------------- 00:09:35.740 Delete I/O Submission Queue (00h): Supported 00:09:35.740 Create I/O Submission Queue (01h): Supported 00:09:35.740 Get Log Page (02h): Supported 00:09:35.740 Delete I/O Completion Queue (04h): Supported 00:09:35.740 Create I/O Completion Queue (05h): Supported 00:09:35.740 Identify (06h): Supported 00:09:35.740 Abort (08h): Supported 00:09:35.740 Set Features (09h): Supported 00:09:35.740 Get Features (0Ah): Supported 00:09:35.740 Asynchronous Event Request (0Ch): Supported 00:09:35.740 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:35.740 Directive Send (19h): Supported 00:09:35.740 Directive Receive (1Ah): Supported 00:09:35.740 Virtualization Management (1Ch): Supported 00:09:35.740 Doorbell Buffer Config (7Ch): Supported 00:09:35.740 Format NVM (80h): Supported LBA-Change 00:09:35.740 I/O Commands 00:09:35.740 ------------ 00:09:35.740 Flush (00h): Supported LBA-Change 00:09:35.740 Write (01h): Supported LBA-Change 00:09:35.740 Read (02h): Supported 00:09:35.740 Compare (05h): Supported 00:09:35.740 Write Zeroes (08h): Supported LBA-Change 00:09:35.740 Dataset Management (09h): Supported LBA-Change 
00:09:35.740 Unknown (0Ch): Supported 00:09:35.740 Unknown (12h): Supported 00:09:35.740 Copy (19h): Supported LBA-Change 00:09:35.740 Unknown (1Dh): Supported LBA-Change 00:09:35.740 00:09:35.740 Error Log 00:09:35.740 ========= 00:09:35.740 00:09:35.740 Arbitration 00:09:35.740 =========== 00:09:35.740 Arbitration Burst: no limit 00:09:35.740 00:09:35.740 Power Management 00:09:35.740 ================ 00:09:35.740 Number of Power States: 1 00:09:35.740 Current Power State: Power State #0 00:09:35.740 Power State #0: 00:09:35.740 Max Power: 25.00 W 00:09:35.740 Non-Operational State: Operational 00:09:35.740 Entry Latency: 16 microseconds 00:09:35.740 Exit Latency: 4 microseconds 00:09:35.740 Relative Read Throughput: 0 00:09:35.740 Relative Read Latency: 0 00:09:35.740 Relative Write Throughput: 0 00:09:35.740 Relative Write Latency: 0 00:09:35.740 Idle Power: Not Reported 00:09:35.740 Active Power: Not Reported 00:09:35.740 Non-Operational Permissive Mode: Not Supported 00:09:35.740 00:09:35.740 Health Information 00:09:35.740 ================== 00:09:35.740 Critical Warnings: 00:09:35.740 Available Spare Space: OK 00:09:35.740 Temperature: OK 00:09:35.740 Device Reliability: OK 00:09:35.740 Read Only: No 00:09:35.740 Volatile Memory Backup: OK 00:09:35.740 Current Temperature: 323 Kelvin (50 Celsius) 00:09:35.740 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:35.740 Available Spare: 0% 00:09:35.740 Available Spare Threshold: 0% 00:09:35.740 Life Percentage Used: 0% 00:09:35.740 Data Units Read: 1804 00:09:35.740 Data Units Written: 833 00:09:35.740 Host Read Commands: 90311 00:09:35.740 Host Write Commands: 44908 00:09:35.740 Controller Busy Time: 0 minutes 00:09:35.740 Power Cycles: 0 00:09:35.740 Power On Hours: 0 hours 00:09:35.740 Unsafe Shutdowns: 0 00:09:35.740 Unrecoverable Media Errors: 0 00:09:35.740 Lifetime Error Log Entries: 0 00:09:35.740 Warning Temperature Time: 0 minutes 00:09:35.740 Critical Temperature Time: 0 minutes 00:09:35.740 00:09:35.740 Number of Queues 00:09:35.740 ================ 00:09:35.740 Number of I/O Submission Queues: 64 00:09:35.740 Number of I/O Completion Queues: 64 00:09:35.740 00:09:35.740 ZNS Specific Controller Data 00:09:35.740 ============================ 00:09:35.740 Zone Append Size Limit: 0 00:09:35.740 00:09:35.740 00:09:35.740 Active Namespaces 00:09:35.740 ================= 00:09:35.740 Namespace ID:1 00:09:35.740 Error Recovery Timeout: Unlimited 00:09:35.740 Command Set Identifier: NVM (00h) 00:09:35.740 Deallocate: Supported 00:09:35.740 Deallocated/Unwritten Error: Supported 00:09:35.740 Deallocated Read Value: All 0x00 00:09:35.741 Deallocate in Write Zeroes: Not Supported 00:09:35.741 Deallocated Guard Field: 0xFFFF 00:09:35.741 Flush: Supported 00:09:35.741 Reservation: Not Supported 00:09:35.741 Metadata Transferred as: Separate Metadata Buffer 00:09:35.741 Namespace Sharing Capabilities: Private 00:09:35.741 Size (in LBAs): 1548666 (5GiB) 00:09:35.741 Capacity (in LBAs): 1548666 (5GiB) 00:09:35.741 Utilization (in LBAs): 1548666 (5GiB) 00:09:35.741 Thin Provisioning: Not Supported 00:09:35.741 Per-NS Atomic Units: No 00:09:35.741 Maximum Single Source Range Length: 128 00:09:35.741 Maximum Copy Length: 128 00:09:35.741 Maximum Source Range Count: 128 00:09:35.741 NGUID/EUI64 Never Reused: No 00:09:35.741 Namespace Write Protected: No 00:09:35.741 Number of LBA Formats: 8 00:09:35.741 Current LBA Format: LBA Format #07 00:09:35.741 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:35.741 LBA Format #01: Data Size: 512 
Metadata Size: 8 00:09:35.741 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:35.741 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:35.741 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:35.741 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:35.741 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:35.741 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:35.741 00:09:35.741 20:16:50 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:35.741 20:16:50 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:36.006 ===================================================== 00:09:36.006 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:36.006 ===================================================== 00:09:36.006 Controller Capabilities/Features 00:09:36.006 ================================ 00:09:36.006 Vendor ID: 1b36 00:09:36.006 Subsystem Vendor ID: 1af4 00:09:36.006 Serial Number: 12341 00:09:36.006 Model Number: QEMU NVMe Ctrl 00:09:36.006 Firmware Version: 8.0.0 00:09:36.006 Recommended Arb Burst: 6 00:09:36.006 IEEE OUI Identifier: 00 54 52 00:09:36.006 Multi-path I/O 00:09:36.006 May have multiple subsystem ports: No 00:09:36.006 May have multiple controllers: No 00:09:36.006 Associated with SR-IOV VF: No 00:09:36.006 Max Data Transfer Size: 524288 00:09:36.006 Max Number of Namespaces: 256 00:09:36.006 Max Number of I/O Queues: 64 00:09:36.006 NVMe Specification Version (VS): 1.4 00:09:36.006 NVMe Specification Version (Identify): 1.4 00:09:36.006 Maximum Queue Entries: 2048 00:09:36.006 Contiguous Queues Required: Yes 00:09:36.006 Arbitration Mechanisms Supported 00:09:36.006 Weighted Round Robin: Not Supported 00:09:36.006 Vendor Specific: Not Supported 00:09:36.006 Reset Timeout: 7500 ms 00:09:36.006 Doorbell Stride: 4 bytes 00:09:36.006 NVM Subsystem Reset: Not Supported 00:09:36.006 Command Sets Supported 00:09:36.006 NVM Command Set: Supported 00:09:36.006 Boot Partition: Not Supported 00:09:36.006 Memory Page Size Minimum: 4096 bytes 00:09:36.006 Memory Page Size Maximum: 65536 bytes 00:09:36.006 Persistent Memory Region: Not Supported 00:09:36.006 Optional Asynchronous Events Supported 00:09:36.006 Namespace Attribute Notices: Supported 00:09:36.006 Firmware Activation Notices: Not Supported 00:09:36.006 ANA Change Notices: Not Supported 00:09:36.006 PLE Aggregate Log Change Notices: Not Supported 00:09:36.006 LBA Status Info Alert Notices: Not Supported 00:09:36.006 EGE Aggregate Log Change Notices: Not Supported 00:09:36.006 Normal NVM Subsystem Shutdown event: Not Supported 00:09:36.006 Zone Descriptor Change Notices: Not Supported 00:09:36.006 Discovery Log Change Notices: Not Supported 00:09:36.006 Controller Attributes 00:09:36.006 128-bit Host Identifier: Not Supported 00:09:36.006 Non-Operational Permissive Mode: Not Supported 00:09:36.006 NVM Sets: Not Supported 00:09:36.006 Read Recovery Levels: Not Supported 00:09:36.006 Endurance Groups: Not Supported 00:09:36.006 Predictable Latency Mode: Not Supported 00:09:36.006 Traffic Based Keep Alive: Not Supported 00:09:36.006 Namespace Granularity: Not Supported 00:09:36.006 SQ Associations: Not Supported 00:09:36.006 UUID List: Not Supported 00:09:36.006 Multi-Domain Subsystem: Not Supported 00:09:36.006 Fixed Capacity Management: Not Supported 00:09:36.006 Variable Capacity Management: Not Supported 00:09:36.006 Delete Endurance Group: Not Supported 00:09:36.006 Delete NVM Set: Not Supported 00:09:36.006 Extended LBA 
Formats Supported: Supported 00:09:36.006 Flexible Data Placement Supported: Not Supported 00:09:36.006 00:09:36.006 Controller Memory Buffer Support 00:09:36.006 ================================ 00:09:36.006 Supported: No 00:09:36.006 00:09:36.006 Persistent Memory Region Support 00:09:36.006 ================================ 00:09:36.006 Supported: No 00:09:36.006 00:09:36.006 Admin Command Set Attributes 00:09:36.006 ============================ 00:09:36.006 Security Send/Receive: Not Supported 00:09:36.006 Format NVM: Supported 00:09:36.006 Firmware Activate/Download: Not Supported 00:09:36.006 Namespace Management: Supported 00:09:36.006 Device Self-Test: Not Supported 00:09:36.006 Directives: Supported 00:09:36.006 NVMe-MI: Not Supported 00:09:36.006 Virtualization Management: Not Supported 00:09:36.006 Doorbell Buffer Config: Supported 00:09:36.006 Get LBA Status Capability: Not Supported 00:09:36.006 Command & Feature Lockdown Capability: Not Supported 00:09:36.006 Abort Command Limit: 4 00:09:36.006 Async Event Request Limit: 4 00:09:36.006 Number of Firmware Slots: N/A 00:09:36.006 Firmware Slot 1 Read-Only: N/A 00:09:36.006 Firmware Activation Without Reset: N/A 00:09:36.006 Multiple Update Detection Support: N/A 00:09:36.006 Firmware Update Granularity: No Information Provided 00:09:36.006 Per-Namespace SMART Log: Yes 00:09:36.006 Asymmetric Namespace Access Log Page: Not Supported 00:09:36.006 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:36.006 Command Effects Log Page: Supported 00:09:36.006 Get Log Page Extended Data: Supported 00:09:36.006 Telemetry Log Pages: Not Supported 00:09:36.006 Persistent Event Log Pages: Not Supported 00:09:36.006 Supported Log Pages Log Page: May Support 00:09:36.006 Commands Supported & Effects Log Page: Not Supported 00:09:36.006 Feature Identifiers & Effects Log Page: May Support 00:09:36.006 NVMe-MI Commands & Effects Log Page: May Support 00:09:36.006 Data Area 4 for Telemetry Log: Not Supported 00:09:36.006 Error Log Page Entries Supported: 1 00:09:36.006 Keep Alive: Not Supported 00:09:36.006 00:09:36.006 NVM Command Set Attributes 00:09:36.006 ========================== 00:09:36.006 Submission Queue Entry Size 00:09:36.006 Max: 64 00:09:36.006 Min: 64 00:09:36.006 Completion Queue Entry Size 00:09:36.006 Max: 16 00:09:36.006 Min: 16 00:09:36.006 Number of Namespaces: 256 00:09:36.006 Compare Command: Supported 00:09:36.006 Write Uncorrectable Command: Not Supported 00:09:36.006 Dataset Management Command: Supported 00:09:36.006 Write Zeroes Command: Supported 00:09:36.006 Set Features Save Field: Supported 00:09:36.006 Reservations: Not Supported 00:09:36.006 Timestamp: Supported 00:09:36.006 Copy: Supported 00:09:36.006 Volatile Write Cache: Present 00:09:36.006 Atomic Write Unit (Normal): 1 00:09:36.006 Atomic Write Unit (PFail): 1 00:09:36.006 Atomic Compare & Write Unit: 1 00:09:36.006 Fused Compare & Write: Not Supported 00:09:36.006 Scatter-Gather List 00:09:36.006 SGL Command Set: Supported 00:09:36.006 SGL Keyed: Not Supported 00:09:36.006 SGL Bit Bucket Descriptor: Not Supported 00:09:36.006 SGL Metadata Pointer: Not Supported 00:09:36.006 Oversized SGL: Not Supported 00:09:36.006 SGL Metadata Address: Not Supported 00:09:36.006 SGL Offset: Not Supported 00:09:36.006 Transport SGL Data Block: Not Supported 00:09:36.006 Replay Protected Memory Block: Not Supported 00:09:36.006 00:09:36.006 Firmware Slot Information 00:09:36.006 ========================= 00:09:36.006 Active slot: 1 00:09:36.006 Slot 1 Firmware Revision: 1.0 
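The xtrace markers interleaved with these dumps (nvme/nvme.sh@15 and nvme/nvme.sh@16 above) show how each per-controller report is produced: the test loops over the PCIe addresses it found and runs spdk_nvme_identify against each one. A minimal sketch of that loop, assuming bdfs[] has already been filled by the test's PCI enumeration and that rootdir points at the SPDK checkout (the log shows the binary under /home/vagrant/spdk_repo/spdk/build/bin):

    for bdf in "${bdfs[@]}"; do
        # One identify report per controller, addressed by its PCIe BDF
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done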
00:09:36.006 00:09:36.006 00:09:36.006 Commands Supported and Effects 00:09:36.006 ============================== 00:09:36.006 Admin Commands 00:09:36.006 -------------- 00:09:36.006 Delete I/O Submission Queue (00h): Supported 00:09:36.006 Create I/O Submission Queue (01h): Supported 00:09:36.006 Get Log Page (02h): Supported 00:09:36.006 Delete I/O Completion Queue (04h): Supported 00:09:36.006 Create I/O Completion Queue (05h): Supported 00:09:36.007 Identify (06h): Supported 00:09:36.007 Abort (08h): Supported 00:09:36.007 Set Features (09h): Supported 00:09:36.007 Get Features (0Ah): Supported 00:09:36.007 Asynchronous Event Request (0Ch): Supported 00:09:36.007 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:36.007 Directive Send (19h): Supported 00:09:36.007 Directive Receive (1Ah): Supported 00:09:36.007 Virtualization Management (1Ch): Supported 00:09:36.007 Doorbell Buffer Config (7Ch): Supported 00:09:36.007 Format NVM (80h): Supported LBA-Change 00:09:36.007 I/O Commands 00:09:36.007 ------------ 00:09:36.007 Flush (00h): Supported LBA-Change 00:09:36.007 Write (01h): Supported LBA-Change 00:09:36.007 Read (02h): Supported 00:09:36.007 Compare (05h): Supported 00:09:36.007 Write Zeroes (08h): Supported LBA-Change 00:09:36.007 Dataset Management (09h): Supported LBA-Change 00:09:36.007 Unknown (0Ch): Supported 00:09:36.007 Unknown (12h): Supported 00:09:36.007 Copy (19h): Supported LBA-Change 00:09:36.007 Unknown (1Dh): Supported LBA-Change 00:09:36.007 00:09:36.007 Error Log 00:09:36.007 ========= 00:09:36.007 00:09:36.007 Arbitration 00:09:36.007 =========== 00:09:36.007 Arbitration Burst: no limit 00:09:36.007 00:09:36.007 Power Management 00:09:36.007 ================ 00:09:36.007 Number of Power States: 1 00:09:36.007 Current Power State: Power State #0 00:09:36.007 Power State #0: 00:09:36.007 Max Power: 25.00 W 00:09:36.007 Non-Operational State: Operational 00:09:36.007 Entry Latency: 16 microseconds 00:09:36.007 Exit Latency: 4 microseconds 00:09:36.007 Relative Read Throughput: 0 00:09:36.007 Relative Read Latency: 0 00:09:36.007 Relative Write Throughput: 0 00:09:36.007 Relative Write Latency: 0 00:09:36.007 Idle Power: Not Reported 00:09:36.007 Active Power: Not Reported 00:09:36.007 Non-Operational Permissive Mode: Not Supported 00:09:36.007 00:09:36.007 Health Information 00:09:36.007 ================== 00:09:36.007 Critical Warnings: 00:09:36.007 Available Spare Space: OK 00:09:36.007 Temperature: OK 00:09:36.007 Device Reliability: OK 00:09:36.007 Read Only: No 00:09:36.007 Volatile Memory Backup: OK 00:09:36.007 Current Temperature: 323 Kelvin (50 Celsius) 00:09:36.007 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:36.007 Available Spare: 0% 00:09:36.007 Available Spare Threshold: 0% 00:09:36.007 Life Percentage Used: 0% 00:09:36.007 Data Units Read: 1213 00:09:36.007 Data Units Written: 565 00:09:36.007 Host Read Commands: 61649 00:09:36.007 Host Write Commands: 30400 00:09:36.007 Controller Busy Time: 0 minutes 00:09:36.007 Power Cycles: 0 00:09:36.007 Power On Hours: 0 hours 00:09:36.007 Unsafe Shutdowns: 0 00:09:36.007 Unrecoverable Media Errors: 0 00:09:36.007 Lifetime Error Log Entries: 0 00:09:36.007 Warning Temperature Time: 0 minutes 00:09:36.007 Critical Temperature Time: 0 minutes 00:09:36.007 00:09:36.007 Number of Queues 00:09:36.007 ================ 00:09:36.007 Number of I/O Submission Queues: 64 00:09:36.007 Number of I/O Completion Queues: 64 00:09:36.007 00:09:36.007 ZNS Specific Controller Data 00:09:36.007 
============================ 00:09:36.007 Zone Append Size Limit: 0 00:09:36.007 00:09:36.007 00:09:36.007 Active Namespaces 00:09:36.007 ================= 00:09:36.007 Namespace ID:1 00:09:36.007 Error Recovery Timeout: Unlimited 00:09:36.007 Command Set Identifier: NVM (00h) 00:09:36.007 Deallocate: Supported 00:09:36.007 Deallocated/Unwritten Error: Supported 00:09:36.007 Deallocated Read Value: All 0x00 00:09:36.007 Deallocate in Write Zeroes: Not Supported 00:09:36.007 Deallocated Guard Field: 0xFFFF 00:09:36.007 Flush: Supported 00:09:36.007 Reservation: Not Supported 00:09:36.007 Namespace Sharing Capabilities: Private 00:09:36.007 Size (in LBAs): 1310720 (5GiB) 00:09:36.007 Capacity (in LBAs): 1310720 (5GiB) 00:09:36.007 Utilization (in LBAs): 1310720 (5GiB) 00:09:36.007 Thin Provisioning: Not Supported 00:09:36.007 Per-NS Atomic Units: No 00:09:36.007 Maximum Single Source Range Length: 128 00:09:36.007 Maximum Copy Length: 128 00:09:36.007 Maximum Source Range Count: 128 00:09:36.007 NGUID/EUI64 Never Reused: No 00:09:36.007 Namespace Write Protected: No 00:09:36.007 Number of LBA Formats: 8 00:09:36.007 Current LBA Format: LBA Format #04 00:09:36.007 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:36.007 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:36.007 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:36.007 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:36.007 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:36.007 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:36.007 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:36.007 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:36.007 00:09:36.007 20:16:50 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:36.007 20:16:50 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:36.282 ===================================================== 00:09:36.282 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:36.282 ===================================================== 00:09:36.282 Controller Capabilities/Features 00:09:36.282 ================================ 00:09:36.282 Vendor ID: 1b36 00:09:36.282 Subsystem Vendor ID: 1af4 00:09:36.282 Serial Number: 12342 00:09:36.282 Model Number: QEMU NVMe Ctrl 00:09:36.282 Firmware Version: 8.0.0 00:09:36.282 Recommended Arb Burst: 6 00:09:36.282 IEEE OUI Identifier: 00 54 52 00:09:36.282 Multi-path I/O 00:09:36.282 May have multiple subsystem ports: No 00:09:36.282 May have multiple controllers: No 00:09:36.282 Associated with SR-IOV VF: No 00:09:36.282 Max Data Transfer Size: 524288 00:09:36.282 Max Number of Namespaces: 256 00:09:36.282 Max Number of I/O Queues: 64 00:09:36.282 NVMe Specification Version (VS): 1.4 00:09:36.282 NVMe Specification Version (Identify): 1.4 00:09:36.282 Maximum Queue Entries: 2048 00:09:36.282 Contiguous Queues Required: Yes 00:09:36.282 Arbitration Mechanisms Supported 00:09:36.282 Weighted Round Robin: Not Supported 00:09:36.282 Vendor Specific: Not Supported 00:09:36.282 Reset Timeout: 7500 ms 00:09:36.282 Doorbell Stride: 4 bytes 00:09:36.282 NVM Subsystem Reset: Not Supported 00:09:36.282 Command Sets Supported 00:09:36.282 NVM Command Set: Supported 00:09:36.282 Boot Partition: Not Supported 00:09:36.282 Memory Page Size Minimum: 4096 bytes 00:09:36.282 Memory Page Size Maximum: 65536 bytes 00:09:36.282 Persistent Memory Region: Not Supported 00:09:36.282 Optional Asynchronous Events Supported 00:09:36.282 Namespace Attribute Notices: 
Supported 00:09:36.282 Firmware Activation Notices: Not Supported 00:09:36.282 ANA Change Notices: Not Supported 00:09:36.282 PLE Aggregate Log Change Notices: Not Supported 00:09:36.282 LBA Status Info Alert Notices: Not Supported 00:09:36.282 EGE Aggregate Log Change Notices: Not Supported 00:09:36.282 Normal NVM Subsystem Shutdown event: Not Supported 00:09:36.282 Zone Descriptor Change Notices: Not Supported 00:09:36.282 Discovery Log Change Notices: Not Supported 00:09:36.282 Controller Attributes 00:09:36.282 128-bit Host Identifier: Not Supported 00:09:36.282 Non-Operational Permissive Mode: Not Supported 00:09:36.282 NVM Sets: Not Supported 00:09:36.282 Read Recovery Levels: Not Supported 00:09:36.282 Endurance Groups: Not Supported 00:09:36.282 Predictable Latency Mode: Not Supported 00:09:36.282 Traffic Based Keep Alive: Not Supported 00:09:36.282 Namespace Granularity: Not Supported 00:09:36.282 SQ Associations: Not Supported 00:09:36.282 UUID List: Not Supported 00:09:36.282 Multi-Domain Subsystem: Not Supported 00:09:36.282 Fixed Capacity Management: Not Supported 00:09:36.282 Variable Capacity Management: Not Supported 00:09:36.282 Delete Endurance Group: Not Supported 00:09:36.282 Delete NVM Set: Not Supported 00:09:36.282 Extended LBA Formats Supported: Supported 00:09:36.282 Flexible Data Placement Supported: Not Supported 00:09:36.282 00:09:36.282 Controller Memory Buffer Support 00:09:36.282 ================================ 00:09:36.282 Supported: No 00:09:36.282 00:09:36.282 Persistent Memory Region Support 00:09:36.282 ================================ 00:09:36.282 Supported: No 00:09:36.282 00:09:36.282 Admin Command Set Attributes 00:09:36.282 ============================ 00:09:36.282 Security Send/Receive: Not Supported 00:09:36.282 Format NVM: Supported 00:09:36.282 Firmware Activate/Download: Not Supported 00:09:36.283 Namespace Management: Supported 00:09:36.283 Device Self-Test: Not Supported 00:09:36.283 Directives: Supported 00:09:36.283 NVMe-MI: Not Supported 00:09:36.283 Virtualization Management: Not Supported 00:09:36.283 Doorbell Buffer Config: Supported 00:09:36.283 Get LBA Status Capability: Not Supported 00:09:36.283 Command & Feature Lockdown Capability: Not Supported 00:09:36.283 Abort Command Limit: 4 00:09:36.283 Async Event Request Limit: 4 00:09:36.283 Number of Firmware Slots: N/A 00:09:36.283 Firmware Slot 1 Read-Only: N/A 00:09:36.283 Firmware Activation Without Reset: N/A 00:09:36.283 Multiple Update Detection Support: N/A 00:09:36.283 Firmware Update Granularity: No Information Provided 00:09:36.283 Per-Namespace SMART Log: Yes 00:09:36.283 Asymmetric Namespace Access Log Page: Not Supported 00:09:36.283 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:36.283 Command Effects Log Page: Supported 00:09:36.283 Get Log Page Extended Data: Supported 00:09:36.283 Telemetry Log Pages: Not Supported 00:09:36.283 Persistent Event Log Pages: Not Supported 00:09:36.283 Supported Log Pages Log Page: May Support 00:09:36.283 Commands Supported & Effects Log Page: Not Supported 00:09:36.283 Feature Identifiers & Effects Log Page: May Support 00:09:36.283 NVMe-MI Commands & Effects Log Page: May Support 00:09:36.283 Data Area 4 for Telemetry Log: Not Supported 00:09:36.283 Error Log Page Entries Supported: 1 00:09:36.283 Keep Alive: Not Supported 00:09:36.283 00:09:36.283 NVM Command Set Attributes 00:09:36.283 ========================== 00:09:36.283 Submission Queue Entry Size 00:09:36.283 Max: 64 00:09:36.283 Min: 64 00:09:36.283 Completion Queue Entry Size 
00:09:36.283 Max: 16 00:09:36.283 Min: 16 00:09:36.283 Number of Namespaces: 256 00:09:36.283 Compare Command: Supported 00:09:36.283 Write Uncorrectable Command: Not Supported 00:09:36.283 Dataset Management Command: Supported 00:09:36.283 Write Zeroes Command: Supported 00:09:36.283 Set Features Save Field: Supported 00:09:36.283 Reservations: Not Supported 00:09:36.283 Timestamp: Supported 00:09:36.283 Copy: Supported 00:09:36.283 Volatile Write Cache: Present 00:09:36.283 Atomic Write Unit (Normal): 1 00:09:36.283 Atomic Write Unit (PFail): 1 00:09:36.283 Atomic Compare & Write Unit: 1 00:09:36.283 Fused Compare & Write: Not Supported 00:09:36.283 Scatter-Gather List 00:09:36.283 SGL Command Set: Supported 00:09:36.283 SGL Keyed: Not Supported 00:09:36.283 SGL Bit Bucket Descriptor: Not Supported 00:09:36.283 SGL Metadata Pointer: Not Supported 00:09:36.283 Oversized SGL: Not Supported 00:09:36.283 SGL Metadata Address: Not Supported 00:09:36.283 SGL Offset: Not Supported 00:09:36.283 Transport SGL Data Block: Not Supported 00:09:36.283 Replay Protected Memory Block: Not Supported 00:09:36.283 00:09:36.283 Firmware Slot Information 00:09:36.283 ========================= 00:09:36.283 Active slot: 1 00:09:36.283 Slot 1 Firmware Revision: 1.0 00:09:36.283 00:09:36.283 00:09:36.283 Commands Supported and Effects 00:09:36.283 ============================== 00:09:36.283 Admin Commands 00:09:36.283 -------------- 00:09:36.283 Delete I/O Submission Queue (00h): Supported 00:09:36.283 Create I/O Submission Queue (01h): Supported 00:09:36.283 Get Log Page (02h): Supported 00:09:36.283 Delete I/O Completion Queue (04h): Supported 00:09:36.283 Create I/O Completion Queue (05h): Supported 00:09:36.283 Identify (06h): Supported 00:09:36.283 Abort (08h): Supported 00:09:36.283 Set Features (09h): Supported 00:09:36.283 Get Features (0Ah): Supported 00:09:36.283 Asynchronous Event Request (0Ch): Supported 00:09:36.283 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:36.283 Directive Send (19h): Supported 00:09:36.283 Directive Receive (1Ah): Supported 00:09:36.283 Virtualization Management (1Ch): Supported 00:09:36.283 Doorbell Buffer Config (7Ch): Supported 00:09:36.283 Format NVM (80h): Supported LBA-Change 00:09:36.283 I/O Commands 00:09:36.283 ------------ 00:09:36.283 Flush (00h): Supported LBA-Change 00:09:36.283 Write (01h): Supported LBA-Change 00:09:36.283 Read (02h): Supported 00:09:36.283 Compare (05h): Supported 00:09:36.283 Write Zeroes (08h): Supported LBA-Change 00:09:36.283 Dataset Management (09h): Supported LBA-Change 00:09:36.283 Unknown (0Ch): Supported 00:09:36.283 Unknown (12h): Supported 00:09:36.283 Copy (19h): Supported LBA-Change 00:09:36.283 Unknown (1Dh): Supported LBA-Change 00:09:36.283 00:09:36.283 Error Log 00:09:36.283 ========= 00:09:36.283 00:09:36.283 Arbitration 00:09:36.283 =========== 00:09:36.283 Arbitration Burst: no limit 00:09:36.283 00:09:36.283 Power Management 00:09:36.283 ================ 00:09:36.283 Number of Power States: 1 00:09:36.283 Current Power State: Power State #0 00:09:36.283 Power State #0: 00:09:36.283 Max Power: 25.00 W 00:09:36.283 Non-Operational State: Operational 00:09:36.283 Entry Latency: 16 microseconds 00:09:36.283 Exit Latency: 4 microseconds 00:09:36.283 Relative Read Throughput: 0 00:09:36.283 Relative Read Latency: 0 00:09:36.283 Relative Write Throughput: 0 00:09:36.283 Relative Write Latency: 0 00:09:36.283 Idle Power: Not Reported 00:09:36.283 Active Power: Not Reported 00:09:36.283 Non-Operational Permissive 
Mode: Not Supported 00:09:36.283 00:09:36.283 Health Information 00:09:36.283 ================== 00:09:36.283 Critical Warnings: 00:09:36.283 Available Spare Space: OK 00:09:36.283 Temperature: OK 00:09:36.283 Device Reliability: OK 00:09:36.283 Read Only: No 00:09:36.283 Volatile Memory Backup: OK 00:09:36.283 Current Temperature: 323 Kelvin (50 Celsius) 00:09:36.283 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:36.283 Available Spare: 0% 00:09:36.283 Available Spare Threshold: 0% 00:09:36.283 Life Percentage Used: 0% 00:09:36.283 Data Units Read: 3636 00:09:36.283 Data Units Written: 1688 00:09:36.283 Host Read Commands: 185552 00:09:36.283 Host Write Commands: 91357 00:09:36.283 Controller Busy Time: 0 minutes 00:09:36.283 Power Cycles: 0 00:09:36.283 Power On Hours: 0 hours 00:09:36.283 Unsafe Shutdowns: 0 00:09:36.283 Unrecoverable Media Errors: 0 00:09:36.283 Lifetime Error Log Entries: 0 00:09:36.283 Warning Temperature Time: 0 minutes 00:09:36.283 Critical Temperature Time: 0 minutes 00:09:36.283 00:09:36.283 Number of Queues 00:09:36.283 ================ 00:09:36.283 Number of I/O Submission Queues: 64 00:09:36.283 Number of I/O Completion Queues: 64 00:09:36.283 00:09:36.283 ZNS Specific Controller Data 00:09:36.283 ============================ 00:09:36.283 Zone Append Size Limit: 0 00:09:36.283 00:09:36.283 00:09:36.283 Active Namespaces 00:09:36.283 ================= 00:09:36.283 Namespace ID:1 00:09:36.283 Error Recovery Timeout: Unlimited 00:09:36.283 Command Set Identifier: NVM (00h) 00:09:36.283 Deallocate: Supported 00:09:36.283 Deallocated/Unwritten Error: Supported 00:09:36.283 Deallocated Read Value: All 0x00 00:09:36.283 Deallocate in Write Zeroes: Not Supported 00:09:36.283 Deallocated Guard Field: 0xFFFF 00:09:36.283 Flush: Supported 00:09:36.283 Reservation: Not Supported 00:09:36.283 Namespace Sharing Capabilities: Private 00:09:36.283 Size (in LBAs): 1048576 (4GiB) 00:09:36.283 Capacity (in LBAs): 1048576 (4GiB) 00:09:36.283 Utilization (in LBAs): 1048576 (4GiB) 00:09:36.283 Thin Provisioning: Not Supported 00:09:36.283 Per-NS Atomic Units: No 00:09:36.283 Maximum Single Source Range Length: 128 00:09:36.283 Maximum Copy Length: 128 00:09:36.283 Maximum Source Range Count: 128 00:09:36.283 NGUID/EUI64 Never Reused: No 00:09:36.283 Namespace Write Protected: No 00:09:36.283 Number of LBA Formats: 8 00:09:36.283 Current LBA Format: LBA Format #04 00:09:36.283 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:36.283 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:36.283 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:36.283 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:36.283 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:36.283 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:36.283 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:36.283 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:36.283 00:09:36.283 Namespace ID:2 00:09:36.283 Error Recovery Timeout: Unlimited 00:09:36.283 Command Set Identifier: NVM (00h) 00:09:36.283 Deallocate: Supported 00:09:36.283 Deallocated/Unwritten Error: Supported 00:09:36.283 Deallocated Read Value: All 0x00 00:09:36.283 Deallocate in Write Zeroes: Not Supported 00:09:36.283 Deallocated Guard Field: 0xFFFF 00:09:36.283 Flush: Supported 00:09:36.283 Reservation: Not Supported 00:09:36.283 Namespace Sharing Capabilities: Private 00:09:36.283 Size (in LBAs): 1048576 (4GiB) 00:09:36.283 Capacity (in LBAs): 1048576 (4GiB) 00:09:36.283 Utilization (in LBAs): 1048576 (4GiB) 
00:09:36.283 Thin Provisioning: Not Supported 00:09:36.283 Per-NS Atomic Units: No 00:09:36.283 Maximum Single Source Range Length: 128 00:09:36.283 Maximum Copy Length: 128 00:09:36.283 Maximum Source Range Count: 128 00:09:36.283 NGUID/EUI64 Never Reused: No 00:09:36.283 Namespace Write Protected: No 00:09:36.283 Number of LBA Formats: 8 00:09:36.283 Current LBA Format: LBA Format #04 00:09:36.284 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:36.284 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:36.284 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:36.284 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:36.284 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:36.284 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:36.284 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:36.284 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:36.284 00:09:36.284 Namespace ID:3 00:09:36.284 Error Recovery Timeout: Unlimited 00:09:36.284 Command Set Identifier: NVM (00h) 00:09:36.284 Deallocate: Supported 00:09:36.284 Deallocated/Unwritten Error: Supported 00:09:36.284 Deallocated Read Value: All 0x00 00:09:36.284 Deallocate in Write Zeroes: Not Supported 00:09:36.284 Deallocated Guard Field: 0xFFFF 00:09:36.284 Flush: Supported 00:09:36.284 Reservation: Not Supported 00:09:36.284 Namespace Sharing Capabilities: Private 00:09:36.284 Size (in LBAs): 1048576 (4GiB) 00:09:36.284 Capacity (in LBAs): 1048576 (4GiB) 00:09:36.284 Utilization (in LBAs): 1048576 (4GiB) 00:09:36.284 Thin Provisioning: Not Supported 00:09:36.284 Per-NS Atomic Units: No 00:09:36.284 Maximum Single Source Range Length: 128 00:09:36.284 Maximum Copy Length: 128 00:09:36.284 Maximum Source Range Count: 128 00:09:36.284 NGUID/EUI64 Never Reused: No 00:09:36.284 Namespace Write Protected: No 00:09:36.284 Number of LBA Formats: 8 00:09:36.284 Current LBA Format: LBA Format #04 00:09:36.284 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:36.284 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:36.284 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:36.284 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:36.284 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:36.284 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:36.284 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:36.284 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:36.284 00:09:36.284 20:16:51 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:36.284 20:16:51 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:36.546 ===================================================== 00:09:36.546 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:36.546 ===================================================== 00:09:36.546 Controller Capabilities/Features 00:09:36.546 ================================ 00:09:36.546 Vendor ID: 1b36 00:09:36.546 Subsystem Vendor ID: 1af4 00:09:36.546 Serial Number: 12343 00:09:36.546 Model Number: QEMU NVMe Ctrl 00:09:36.546 Firmware Version: 8.0.0 00:09:36.546 Recommended Arb Burst: 6 00:09:36.546 IEEE OUI Identifier: 00 54 52 00:09:36.546 Multi-path I/O 00:09:36.546 May have multiple subsystem ports: No 00:09:36.546 May have multiple controllers: Yes 00:09:36.546 Associated with SR-IOV VF: No 00:09:36.546 Max Data Transfer Size: 524288 00:09:36.546 Max Number of Namespaces: 256 00:09:36.546 Max Number of I/O Queues: 64 00:09:36.546 NVMe Specification Version (VS): 1.4 00:09:36.546 NVMe 
Specification Version (Identify): 1.4 00:09:36.546 Maximum Queue Entries: 2048 00:09:36.546 Contiguous Queues Required: Yes 00:09:36.546 Arbitration Mechanisms Supported 00:09:36.546 Weighted Round Robin: Not Supported 00:09:36.546 Vendor Specific: Not Supported 00:09:36.546 Reset Timeout: 7500 ms 00:09:36.547 Doorbell Stride: 4 bytes 00:09:36.547 NVM Subsystem Reset: Not Supported 00:09:36.547 Command Sets Supported 00:09:36.547 NVM Command Set: Supported 00:09:36.547 Boot Partition: Not Supported 00:09:36.547 Memory Page Size Minimum: 4096 bytes 00:09:36.547 Memory Page Size Maximum: 65536 bytes 00:09:36.547 Persistent Memory Region: Not Supported 00:09:36.547 Optional Asynchronous Events Supported 00:09:36.547 Namespace Attribute Notices: Supported 00:09:36.547 Firmware Activation Notices: Not Supported 00:09:36.547 ANA Change Notices: Not Supported 00:09:36.547 PLE Aggregate Log Change Notices: Not Supported 00:09:36.547 LBA Status Info Alert Notices: Not Supported 00:09:36.547 EGE Aggregate Log Change Notices: Not Supported 00:09:36.547 Normal NVM Subsystem Shutdown event: Not Supported 00:09:36.547 Zone Descriptor Change Notices: Not Supported 00:09:36.547 Discovery Log Change Notices: Not Supported 00:09:36.547 Controller Attributes 00:09:36.547 128-bit Host Identifier: Not Supported 00:09:36.547 Non-Operational Permissive Mode: Not Supported 00:09:36.547 NVM Sets: Not Supported 00:09:36.547 Read Recovery Levels: Not Supported 00:09:36.547 Endurance Groups: Supported 00:09:36.547 Predictable Latency Mode: Not Supported 00:09:36.547 Traffic Based Keep Alive: Not Supported 00:09:36.547 Namespace Granularity: Not Supported 00:09:36.547 SQ Associations: Not Supported 00:09:36.547 UUID List: Not Supported 00:09:36.547 Multi-Domain Subsystem: Not Supported 00:09:36.547 Fixed Capacity Management: Not Supported 00:09:36.547 Variable Capacity Management: Not Supported 00:09:36.547 Delete Endurance Group: Not Supported 00:09:36.547 Delete NVM Set: Not Supported 00:09:36.547 Extended LBA Formats Supported: Supported 00:09:36.547 Flexible Data Placement Supported: Supported 00:09:36.547 00:09:36.547 Controller Memory Buffer Support 00:09:36.547 ================================ 00:09:36.547 Supported: No 00:09:36.547 00:09:36.547 Persistent Memory Region Support 00:09:36.547 ================================ 00:09:36.547 Supported: No 00:09:36.547 00:09:36.547 Admin Command Set Attributes 00:09:36.547 ============================ 00:09:36.547 Security Send/Receive: Not Supported 00:09:36.547 Format NVM: Supported 00:09:36.547 Firmware Activate/Download: Not Supported 00:09:36.547 Namespace Management: Supported 00:09:36.547 Device Self-Test: Not Supported 00:09:36.547 Directives: Supported 00:09:36.547 NVMe-MI: Not Supported 00:09:36.547 Virtualization Management: Not Supported 00:09:36.547 Doorbell Buffer Config: Supported 00:09:36.547 Get LBA Status Capability: Not Supported 00:09:36.547 Command & Feature Lockdown Capability: Not Supported 00:09:36.547 Abort Command Limit: 4 00:09:36.547 Async Event Request Limit: 4 00:09:36.547 Number of Firmware Slots: N/A 00:09:36.547 Firmware Slot 1 Read-Only: N/A 00:09:36.547 Firmware Activation Without Reset: N/A 00:09:36.547 Multiple Update Detection Support: N/A 00:09:36.547 Firmware Update Granularity: No Information Provided 00:09:36.547 Per-Namespace SMART Log: Yes 00:09:36.547 Asymmetric Namespace Access Log Page: Not Supported 00:09:36.547 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:36.547 Command Effects Log Page: Supported 00:09:36.547 
Get Log Page Extended Data: Supported 00:09:36.547 Telemetry Log Pages: Not Supported 00:09:36.547 Persistent Event Log Pages: Not Supported 00:09:36.547 Supported Log Pages Log Page: May Support 00:09:36.547 Commands Supported & Effects Log Page: Not Supported 00:09:36.547 Feature Identifiers & Effects Log Page: May Support 00:09:36.547 NVMe-MI Commands & Effects Log Page: May Support 00:09:36.547 Data Area 4 for Telemetry Log: Not Supported 00:09:36.547 Error Log Page Entries Supported: 1 00:09:36.547 Keep Alive: Not Supported 00:09:36.547 00:09:36.547 NVM Command Set Attributes 00:09:36.547 ========================== 00:09:36.547 Submission Queue Entry Size 00:09:36.547 Max: 64 00:09:36.547 Min: 64 00:09:36.547 Completion Queue Entry Size 00:09:36.547 Max: 16 00:09:36.547 Min: 16 00:09:36.547 Number of Namespaces: 256 00:09:36.547 Compare Command: Supported 00:09:36.547 Write Uncorrectable Command: Not Supported 00:09:36.547 Dataset Management Command: Supported 00:09:36.547 Write Zeroes Command: Supported 00:09:36.547 Set Features Save Field: Supported 00:09:36.547 Reservations: Not Supported 00:09:36.547 Timestamp: Supported 00:09:36.547 Copy: Supported 00:09:36.547 Volatile Write Cache: Present 00:09:36.547 Atomic Write Unit (Normal): 1 00:09:36.547 Atomic Write Unit (PFail): 1 00:09:36.547 Atomic Compare & Write Unit: 1 00:09:36.547 Fused Compare & Write: Not Supported 00:09:36.547 Scatter-Gather List 00:09:36.547 SGL Command Set: Supported 00:09:36.547 SGL Keyed: Not Supported 00:09:36.547 SGL Bit Bucket Descriptor: Not Supported 00:09:36.547 SGL Metadata Pointer: Not Supported 00:09:36.547 Oversized SGL: Not Supported 00:09:36.547 SGL Metadata Address: Not Supported 00:09:36.547 SGL Offset: Not Supported 00:09:36.547 Transport SGL Data Block: Not Supported 00:09:36.547 Replay Protected Memory Block: Not Supported 00:09:36.547 00:09:36.547 Firmware Slot Information 00:09:36.547 ========================= 00:09:36.547 Active slot: 1 00:09:36.547 Slot 1 Firmware Revision: 1.0 00:09:36.547 00:09:36.547 00:09:36.547 Commands Supported and Effects 00:09:36.547 ============================== 00:09:36.547 Admin Commands 00:09:36.547 -------------- 00:09:36.547 Delete I/O Submission Queue (00h): Supported 00:09:36.547 Create I/O Submission Queue (01h): Supported 00:09:36.547 Get Log Page (02h): Supported 00:09:36.547 Delete I/O Completion Queue (04h): Supported 00:09:36.547 Create I/O Completion Queue (05h): Supported 00:09:36.547 Identify (06h): Supported 00:09:36.547 Abort (08h): Supported 00:09:36.547 Set Features (09h): Supported 00:09:36.547 Get Features (0Ah): Supported 00:09:36.547 Asynchronous Event Request (0Ch): Supported 00:09:36.547 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:36.547 Directive Send (19h): Supported 00:09:36.547 Directive Receive (1Ah): Supported 00:09:36.547 Virtualization Management (1Ch): Supported 00:09:36.547 Doorbell Buffer Config (7Ch): Supported 00:09:36.547 Format NVM (80h): Supported LBA-Change 00:09:36.547 I/O Commands 00:09:36.547 ------------ 00:09:36.547 Flush (00h): Supported LBA-Change 00:09:36.547 Write (01h): Supported LBA-Change 00:09:36.547 Read (02h): Supported 00:09:36.547 Compare (05h): Supported 00:09:36.547 Write Zeroes (08h): Supported LBA-Change 00:09:36.547 Dataset Management (09h): Supported LBA-Change 00:09:36.547 Unknown (0Ch): Supported 00:09:36.547 Unknown (12h): Supported 00:09:36.547 Copy (19h): Supported LBA-Change 00:09:36.547 Unknown (1Dh): Supported LBA-Change 00:09:36.547 00:09:36.547 Error Log 
00:09:36.547 ========= 00:09:36.547 00:09:36.547 Arbitration 00:09:36.547 =========== 00:09:36.547 Arbitration Burst: no limit 00:09:36.547 00:09:36.547 Power Management 00:09:36.547 ================ 00:09:36.547 Number of Power States: 1 00:09:36.547 Current Power State: Power State #0 00:09:36.547 Power State #0: 00:09:36.547 Max Power: 25.00 W 00:09:36.547 Non-Operational State: Operational 00:09:36.547 Entry Latency: 16 microseconds 00:09:36.547 Exit Latency: 4 microseconds 00:09:36.547 Relative Read Throughput: 0 00:09:36.547 Relative Read Latency: 0 00:09:36.547 Relative Write Throughput: 0 00:09:36.547 Relative Write Latency: 0 00:09:36.547 Idle Power: Not Reported 00:09:36.547 Active Power: Not Reported 00:09:36.547 Non-Operational Permissive Mode: Not Supported 00:09:36.547 00:09:36.547 Health Information 00:09:36.547 ================== 00:09:36.547 Critical Warnings: 00:09:36.547 Available Spare Space: OK 00:09:36.547 Temperature: OK 00:09:36.547 Device Reliability: OK 00:09:36.547 Read Only: No 00:09:36.547 Volatile Memory Backup: OK 00:09:36.547 Current Temperature: 323 Kelvin (50 Celsius) 00:09:36.547 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:36.547 Available Spare: 0% 00:09:36.547 Available Spare Threshold: 0% 00:09:36.547 Life Percentage Used: 0% 00:09:36.547 Data Units Read: 1277 00:09:36.547 Data Units Written: 591 00:09:36.547 Host Read Commands: 62307 00:09:36.547 Host Write Commands: 30697 00:09:36.547 Controller Busy Time: 0 minutes 00:09:36.547 Power Cycles: 0 00:09:36.547 Power On Hours: 0 hours 00:09:36.547 Unsafe Shutdowns: 0 00:09:36.547 Unrecoverable Media Errors: 0 00:09:36.547 Lifetime Error Log Entries: 0 00:09:36.547 Warning Temperature Time: 0 minutes 00:09:36.547 Critical Temperature Time: 0 minutes 00:09:36.547 00:09:36.547 Number of Queues 00:09:36.547 ================ 00:09:36.547 Number of I/O Submission Queues: 64 00:09:36.547 Number of I/O Completion Queues: 64 00:09:36.547 00:09:36.547 ZNS Specific Controller Data 00:09:36.547 ============================ 00:09:36.547 Zone Append Size Limit: 0 00:09:36.547 00:09:36.547 00:09:36.547 Active Namespaces 00:09:36.547 ================= 00:09:36.547 Namespace ID:1 00:09:36.547 Error Recovery Timeout: Unlimited 00:09:36.547 Command Set Identifier: NVM (00h) 00:09:36.547 Deallocate: Supported 00:09:36.548 Deallocated/Unwritten Error: Supported 00:09:36.548 Deallocated Read Value: All 0x00 00:09:36.548 Deallocate in Write Zeroes: Not Supported 00:09:36.548 Deallocated Guard Field: 0xFFFF 00:09:36.548 Flush: Supported 00:09:36.548 Reservation: Not Supported 00:09:36.548 Namespace Sharing Capabilities: Multiple Controllers 00:09:36.548 Size (in LBAs): 262144 (1GiB) 00:09:36.548 Capacity (in LBAs): 262144 (1GiB) 00:09:36.548 Utilization (in LBAs): 262144 (1GiB) 00:09:36.548 Thin Provisioning: Not Supported 00:09:36.548 Per-NS Atomic Units: No 00:09:36.548 Maximum Single Source Range Length: 128 00:09:36.548 Maximum Copy Length: 128 00:09:36.548 Maximum Source Range Count: 128 00:09:36.548 NGUID/EUI64 Never Reused: No 00:09:36.548 Namespace Write Protected: No 00:09:36.548 Endurance group ID: 1 00:09:36.548 Number of LBA Formats: 8 00:09:36.548 Current LBA Format: LBA Format #04 00:09:36.548 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:36.548 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:36.548 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:36.548 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:36.548 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:36.548 LBA 
Format #05: Data Size: 4096 Metadata Size: 8 00:09:36.548 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:36.548 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:36.548 00:09:36.548 Get Feature FDP: 00:09:36.548 ================ 00:09:36.548 Enabled: Yes 00:09:36.548 FDP configuration index: 0 00:09:36.548 00:09:36.548 FDP configurations log page 00:09:36.548 =========================== 00:09:36.548 Number of FDP configurations: 1 00:09:36.548 Version: 0 00:09:36.548 Size: 112 00:09:36.548 FDP Configuration Descriptor: 0 00:09:36.548 Descriptor Size: 96 00:09:36.548 Reclaim Group Identifier format: 2 00:09:36.548 FDP Volatile Write Cache: Not Present 00:09:36.548 FDP Configuration: Valid 00:09:36.548 Vendor Specific Size: 0 00:09:36.548 Number of Reclaim Groups: 2 00:09:36.548 Number of Reclaim Unit Handles: 8 00:09:36.548 Max Placement Identifiers: 128 00:09:36.548 Number of Namespaces Supported: 256 00:09:36.548 Reclaim Unit Nominal Size: 6000000 bytes 00:09:36.548 Estimated Reclaim Unit Time Limit: Not Reported 00:09:36.548 RUH Desc #000: RUH Type: Initially Isolated 00:09:36.548 RUH Desc #001: RUH Type: Initially Isolated 00:09:36.548 RUH Desc #002: RUH Type: Initially Isolated 00:09:36.548 RUH Desc #003: RUH Type: Initially Isolated 00:09:36.548 RUH Desc #004: RUH Type: Initially Isolated 00:09:36.548 RUH Desc #005: RUH Type: Initially Isolated 00:09:36.548 RUH Desc #006: RUH Type: Initially Isolated 00:09:36.548 RUH Desc #007: RUH Type: Initially Isolated 00:09:36.548 00:09:36.548 FDP reclaim unit handle usage log page 00:09:36.548 ====================================== 00:09:36.548 Number of Reclaim Unit Handles: 8 00:09:36.548 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:36.548 RUH Usage Desc #001: RUH Attributes: Unused 00:09:36.548 RUH Usage Desc #002: RUH Attributes: Unused 00:09:36.548 RUH Usage Desc #003: RUH Attributes: Unused 00:09:36.548 RUH Usage Desc #004: RUH Attributes: Unused 00:09:36.548 RUH Usage Desc #005: RUH Attributes: Unused 00:09:36.548 RUH Usage Desc #006: RUH Attributes: Unused 00:09:36.548 RUH Usage Desc #007: RUH Attributes: Unused 00:09:36.548 00:09:36.548 FDP statistics log page 00:09:36.548 ======================= 00:09:36.548 Host bytes with metadata written: 363941888 00:09:36.548 Media bytes with metadata written: 363999232 00:09:36.548 Media bytes erased: 0 00:09:36.548 00:09:36.548 FDP events log page 00:09:36.548 =================== 00:09:36.548 Number of FDP events: 0 00:09:36.548 00:09:36.548 00:09:36.548 real 0m1.225s 00:09:36.548 user 0m0.386s 00:09:36.548 sys 0m0.606s 00:09:36.548 20:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.548 20:16:51 -- common/autotest_common.sh@10 -- # set +x 00:09:36.548 ************************************ 00:09:36.548 END TEST nvme_identify 00:09:36.548 ************************************ 00:09:36.548 20:16:51 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:36.548 20:16:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:36.548 20:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:36.548 20:16:51 -- common/autotest_common.sh@10 -- # set +x 00:09:36.548 ************************************ 00:09:36.548 START TEST nvme_perf 00:09:36.548 ************************************ 00:09:36.548 20:16:51 -- common/autotest_common.sh@1104 -- # nvme_perf 00:09:36.548 20:16:51 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:37.939 Initializing NVMe Controllers
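
The spdk_nvme_perf run launched just above drives each attached namespace with 128 outstanding 12288-byte (12 KiB, three 4 KiB blocks at the current LBA format #04) reads for one second, then prints the per-device summary and latency histograms that follow. A minimal bash sketch of rerunning such a pass and pulling out the per-device summary rows is given here; the SPDK_DIR path and the awk field positions are assumptions read off this log rather than a stable interface, and the -LL, -i 0, and -N options are simply kept as logged.

#!/usr/bin/env bash
# Sketch only, not part of the harness: rerun the read pass and summarize it.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout path

# Flags as logged above: -q 128 queue depth per namespace, -w read 100% reads,
# -o 12288 I/O size in bytes, -t 1 second run; -LL -i 0 -N copied verbatim.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N |
  tee perf-read.log

# Device rows in the summary table look like:
#   PCIE (0000:00:07.0) NSID 1 from core 0: 8219.24 96.32 15569.85 12124.22 38323.17
# i.e. IOPS, MiB/s, then average/min/max latency in microseconds.
awk '/^PCIE .* from core/ {
  printf "%s NSID %s: avg=%sus max=%sus\n", $2, $4, $(NF-2), $NF
}' perf-read.log

As a sanity check on the figures below, Little's law ties the averages to the queue depth: at roughly 8219 IOPS per namespace with 128 I/Os outstanding, the expected mean latency is 128 / 8219.24 s, about 15.6 ms, which matches the reported averages of around 15570 us.
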
00:09:37.939 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:37.939 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:37.939 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:37.939 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:37.939 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:37.939 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:37.939 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:37.939 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:37.939 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:37.939 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:37.939 Initialization complete. Launching workers. 00:09:37.939 ======================================================== 00:09:37.939 Latency(us) 00:09:37.939 Device Information : IOPS MiB/s Average min max 00:09:37.939 PCIE (0000:00:07.0) NSID 1 from core 0: 8219.24 96.32 15569.85 12124.22 38323.17 00:09:37.939 PCIE (0000:00:09.0) NSID 1 from core 0: 8219.24 96.32 15559.64 11745.48 38107.41 00:09:37.939 PCIE (0000:00:06.0) NSID 1 from core 0: 8219.24 96.32 15541.77 11521.36 37865.44 00:09:37.939 PCIE (0000:00:08.0) NSID 1 from core 0: 8219.24 96.32 15526.06 11971.31 37263.91 00:09:37.939 PCIE (0000:00:08.0) NSID 2 from core 0: 8219.24 96.32 15509.18 10005.30 38794.91 00:09:37.939 PCIE (0000:00:08.0) NSID 3 from core 0: 8345.69 97.80 15258.60 9372.69 24040.26 00:09:37.939 ======================================================== 00:09:37.939 Total : 49441.89 579.40 15493.58 9372.69 38794.91 00:09:37.939 00:09:37.939 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:37.939 ================================================================================= 00:09:37.939 1.00000% : 12552.665us 00:09:37.939 10.00000% : 13510.498us 00:09:37.939 25.00000% : 14216.271us 00:09:37.939 50.00000% : 15022.868us 00:09:37.939 75.00000% : 16131.938us 00:09:37.939 90.00000% : 17845.957us 00:09:37.939 95.00000% : 19156.677us 00:09:37.939 98.00000% : 20366.572us 00:09:37.939 99.00000% : 36095.212us 00:09:37.939 99.50000% : 37305.108us 00:09:37.939 99.90000% : 38111.705us 00:09:37.939 99.99000% : 38515.003us 00:09:37.939 99.99900% : 38515.003us 00:09:37.939 99.99990% : 38515.003us 00:09:37.939 99.99999% : 38515.003us 00:09:37.939 00:09:37.939 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:37.939 ================================================================================= 00:09:37.939 1.00000% : 12552.665us 00:09:37.939 10.00000% : 13611.323us 00:09:37.939 25.00000% : 14216.271us 00:09:37.939 50.00000% : 15022.868us 00:09:37.939 75.00000% : 16031.114us 00:09:37.939 90.00000% : 17845.957us 00:09:37.939 95.00000% : 18854.203us 00:09:37.939 98.00000% : 20265.748us 00:09:37.939 99.00000% : 35893.563us 00:09:37.939 99.50000% : 37103.458us 00:09:37.939 99.90000% : 37910.055us 00:09:37.939 99.99000% : 38111.705us 00:09:37.939 99.99900% : 38111.705us 00:09:37.939 99.99990% : 38111.705us 00:09:37.939 99.99999% : 38111.705us 00:09:37.939 00:09:37.939 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:37.939 ================================================================================= 00:09:37.939 1.00000% : 12502.252us 00:09:37.939 10.00000% : 13510.498us 00:09:37.939 25.00000% : 14115.446us 00:09:37.939 50.00000% : 15123.692us 00:09:37.939 75.00000% : 16131.938us 00:09:37.939 90.00000% : 17745.132us 00:09:37.939 95.00000% : 18955.028us 00:09:37.939 98.00000% : 20870.695us 00:09:37.939 
99.00000% : 35490.265us 00:09:37.939 99.50000% : 36700.160us 00:09:37.939 99.90000% : 37708.406us 00:09:37.939 99.99000% : 37910.055us 00:09:37.939 99.99900% : 37910.055us 00:09:37.939 99.99990% : 37910.055us 00:09:37.939 99.99999% : 37910.055us 00:09:37.939 00:09:37.939 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:37.939 ================================================================================= 00:09:37.939 1.00000% : 12502.252us 00:09:37.939 10.00000% : 13510.498us 00:09:37.939 25.00000% : 14115.446us 00:09:37.939 50.00000% : 14922.043us 00:09:37.939 75.00000% : 15930.289us 00:09:37.939 90.00000% : 17946.782us 00:09:37.939 95.00000% : 19257.502us 00:09:37.939 98.00000% : 21979.766us 00:09:37.939 99.00000% : 34885.317us 00:09:37.939 99.50000% : 36095.212us 00:09:37.939 99.90000% : 37103.458us 00:09:37.939 99.99000% : 37305.108us 00:09:37.939 99.99900% : 37305.108us 00:09:37.939 99.99990% : 37305.108us 00:09:37.939 99.99999% : 37305.108us 00:09:37.939 00:09:37.939 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:37.939 ================================================================================= 00:09:37.939 1.00000% : 11746.068us 00:09:37.939 10.00000% : 13409.674us 00:09:37.939 25.00000% : 14115.446us 00:09:37.939 50.00000% : 14922.043us 00:09:37.939 75.00000% : 15930.289us 00:09:37.939 90.00000% : 18350.080us 00:09:37.939 95.00000% : 19660.800us 00:09:37.939 98.00000% : 21475.643us 00:09:37.939 99.00000% : 36498.511us 00:09:37.939 99.50000% : 37708.406us 00:09:37.939 99.90000% : 38716.652us 00:09:37.939 99.99000% : 38918.302us 00:09:37.939 99.99900% : 38918.302us 00:09:37.939 99.99990% : 38918.302us 00:09:37.939 99.99999% : 38918.302us 00:09:37.939 00:09:37.939 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:37.939 ================================================================================= 00:09:37.939 1.00000% : 11141.120us 00:09:37.939 10.00000% : 13409.674us 00:09:37.939 25.00000% : 14014.622us 00:09:37.939 50.00000% : 14922.043us 00:09:37.939 75.00000% : 16131.938us 00:09:37.939 90.00000% : 17946.782us 00:09:37.939 95.00000% : 19559.975us 00:09:37.939 98.00000% : 20769.871us 00:09:37.939 99.00000% : 21778.117us 00:09:37.939 99.50000% : 22887.188us 00:09:37.939 99.90000% : 23895.434us 00:09:37.939 99.99000% : 24097.083us 00:09:37.939 99.99900% : 24097.083us 00:09:37.939 99.99990% : 24097.083us 00:09:37.939 99.99999% : 24097.083us 00:09:37.939 00:09:37.939 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:37.939 ============================================================================== 00:09:37.939 Range in us Cumulative IO count 00:09:37.939 12098.954 - 12149.366: 0.0240% ( 2) 00:09:37.939 12149.366 - 12199.778: 0.0841% ( 5) 00:09:37.939 12199.778 - 12250.191: 0.1683% ( 7) 00:09:37.939 12250.191 - 12300.603: 0.3005% ( 11) 00:09:37.939 12300.603 - 12351.015: 0.4087% ( 9) 00:09:37.939 12351.015 - 12401.428: 0.5769% ( 14) 00:09:37.939 12401.428 - 12451.840: 0.7212% ( 12) 00:09:37.939 12451.840 - 12502.252: 0.8774% ( 13) 00:09:37.939 12502.252 - 12552.665: 1.0457% ( 14) 00:09:37.939 12552.665 - 12603.077: 1.2019% ( 13) 00:09:37.939 12603.077 - 12653.489: 1.3702% ( 14) 00:09:37.939 12653.489 - 12703.902: 1.5024% ( 11) 00:09:37.939 12703.902 - 12754.314: 1.6707% ( 14) 00:09:37.939 12754.314 - 12804.726: 1.9111% ( 20) 00:09:37.939 12804.726 - 12855.138: 2.1755% ( 22) 00:09:37.939 12855.138 - 12905.551: 2.5240% ( 29) 00:09:37.939 12905.551 - 13006.375: 3.5697% ( 87) 00:09:37.939 
13006.375 - 13107.200: 4.5913% ( 85) 00:09:37.939 13107.200 - 13208.025: 5.5529% ( 80) 00:09:37.939 13208.025 - 13308.849: 6.7067% ( 96) 00:09:37.939 13308.849 - 13409.674: 8.2692% ( 130) 00:09:37.939 13409.674 - 13510.498: 10.0120% ( 145) 00:09:37.939 13510.498 - 13611.323: 11.8630% ( 154) 00:09:37.939 13611.323 - 13712.148: 13.7260% ( 155) 00:09:37.939 13712.148 - 13812.972: 15.8293% ( 175) 00:09:37.939 13812.972 - 13913.797: 18.3173% ( 207) 00:09:37.939 13913.797 - 14014.622: 20.9736% ( 221) 00:09:37.939 14014.622 - 14115.446: 23.7740% ( 233) 00:09:37.939 14115.446 - 14216.271: 26.8149% ( 253) 00:09:37.939 14216.271 - 14317.095: 29.9760% ( 263) 00:09:37.939 14317.095 - 14417.920: 32.8606% ( 240) 00:09:37.939 14417.920 - 14518.745: 35.9014% ( 253) 00:09:37.939 14518.745 - 14619.569: 38.9183% ( 251) 00:09:37.939 14619.569 - 14720.394: 41.9351% ( 251) 00:09:37.939 14720.394 - 14821.218: 44.9399% ( 250) 00:09:37.939 14821.218 - 14922.043: 48.0409% ( 258) 00:09:37.939 14922.043 - 15022.868: 51.1178% ( 256) 00:09:37.939 15022.868 - 15123.692: 54.0385% ( 243) 00:09:37.939 15123.692 - 15224.517: 56.9591% ( 243) 00:09:37.940 15224.517 - 15325.342: 59.6635% ( 225) 00:09:37.940 15325.342 - 15426.166: 62.3438% ( 223) 00:09:37.940 15426.166 - 15526.991: 64.8678% ( 210) 00:09:37.940 15526.991 - 15627.815: 67.2596% ( 199) 00:09:37.940 15627.815 - 15728.640: 69.2788% ( 168) 00:09:37.940 15728.640 - 15829.465: 71.2380% ( 163) 00:09:37.940 15829.465 - 15930.289: 73.0889% ( 154) 00:09:37.940 15930.289 - 16031.114: 74.9399% ( 154) 00:09:37.940 16031.114 - 16131.938: 76.6587% ( 143) 00:09:37.940 16131.938 - 16232.763: 78.0769% ( 118) 00:09:37.940 16232.763 - 16333.588: 79.3389% ( 105) 00:09:37.940 16333.588 - 16434.412: 80.4688% ( 94) 00:09:37.940 16434.412 - 16535.237: 81.6466% ( 98) 00:09:37.940 16535.237 - 16636.062: 82.6562% ( 84) 00:09:37.940 16636.062 - 16736.886: 83.6058% ( 79) 00:09:37.940 16736.886 - 16837.711: 84.4832% ( 73) 00:09:37.940 16837.711 - 16938.535: 85.1803% ( 58) 00:09:37.940 16938.535 - 17039.360: 85.8774% ( 58) 00:09:37.940 17039.360 - 17140.185: 86.5625% ( 57) 00:09:37.940 17140.185 - 17241.009: 87.2476% ( 57) 00:09:37.940 17241.009 - 17341.834: 87.8846% ( 53) 00:09:37.940 17341.834 - 17442.658: 88.3654% ( 40) 00:09:37.940 17442.658 - 17543.483: 88.8341% ( 39) 00:09:37.940 17543.483 - 17644.308: 89.3029% ( 39) 00:09:37.940 17644.308 - 17745.132: 89.6755% ( 31) 00:09:37.940 17745.132 - 17845.957: 90.0721% ( 33) 00:09:37.940 17845.957 - 17946.782: 90.4928% ( 35) 00:09:37.940 17946.782 - 18047.606: 90.8654% ( 31) 00:09:37.940 18047.606 - 18148.431: 91.3462% ( 40) 00:09:37.940 18148.431 - 18249.255: 91.8870% ( 45) 00:09:37.940 18249.255 - 18350.080: 92.3558% ( 39) 00:09:37.940 18350.080 - 18450.905: 92.7524% ( 33) 00:09:37.940 18450.905 - 18551.729: 93.1130% ( 30) 00:09:37.940 18551.729 - 18652.554: 93.4976% ( 32) 00:09:37.940 18652.554 - 18753.378: 93.8582% ( 30) 00:09:37.940 18753.378 - 18854.203: 94.2067% ( 29) 00:09:37.940 18854.203 - 18955.028: 94.5433% ( 28) 00:09:37.940 18955.028 - 19055.852: 94.8438% ( 25) 00:09:37.940 19055.852 - 19156.677: 95.1562% ( 26) 00:09:37.940 19156.677 - 19257.502: 95.5048% ( 29) 00:09:37.940 19257.502 - 19358.326: 95.8413% ( 28) 00:09:37.940 19358.326 - 19459.151: 96.1899% ( 29) 00:09:37.940 19459.151 - 19559.975: 96.5024% ( 26) 00:09:37.940 19559.975 - 19660.800: 96.7909% ( 24) 00:09:37.940 19660.800 - 19761.625: 97.0673% ( 23) 00:09:37.940 19761.625 - 19862.449: 97.2476% ( 15) 00:09:37.940 19862.449 - 19963.274: 97.4279% ( 15) 00:09:37.940 19963.274 
- 20064.098: 97.6082% ( 15) 00:09:37.940 20064.098 - 20164.923: 97.7885% ( 15) 00:09:37.940 20164.923 - 20265.748: 97.9567% ( 14) 00:09:37.940 20265.748 - 20366.572: 98.0529% ( 8) 00:09:37.940 20366.572 - 20467.397: 98.1731% ( 10) 00:09:37.940 20467.397 - 20568.222: 98.2812% ( 9) 00:09:37.940 20568.222 - 20669.046: 98.3774% ( 8) 00:09:37.940 20669.046 - 20769.871: 98.4495% ( 6) 00:09:37.940 20769.871 - 20870.695: 98.4615% ( 1) 00:09:37.940 34885.317 - 35086.966: 98.5577% ( 8) 00:09:37.940 35086.966 - 35288.615: 98.6538% ( 8) 00:09:37.940 35288.615 - 35490.265: 98.7380% ( 7) 00:09:37.940 35490.265 - 35691.914: 98.8221% ( 7) 00:09:37.940 35691.914 - 35893.563: 98.9062% ( 7) 00:09:37.940 35893.563 - 36095.212: 99.0024% ( 8) 00:09:37.940 36095.212 - 36296.862: 99.0986% ( 8) 00:09:37.940 36296.862 - 36498.511: 99.1827% ( 7) 00:09:37.940 36498.511 - 36700.160: 99.2668% ( 7) 00:09:37.940 36700.160 - 36901.809: 99.3630% ( 8) 00:09:37.940 36901.809 - 37103.458: 99.4471% ( 7) 00:09:37.940 37103.458 - 37305.108: 99.5433% ( 8) 00:09:37.940 37305.108 - 37506.757: 99.6274% ( 7) 00:09:37.940 37506.757 - 37708.406: 99.7236% ( 8) 00:09:37.940 37708.406 - 37910.055: 99.8197% ( 8) 00:09:37.940 37910.055 - 38111.705: 99.9159% ( 8) 00:09:37.940 38111.705 - 38313.354: 99.9880% ( 6) 00:09:37.940 38313.354 - 38515.003: 100.0000% ( 1) 00:09:37.940 00:09:37.940 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:37.940 ============================================================================== 00:09:37.940 Range in us Cumulative IO count 00:09:37.940 11695.655 - 11746.068: 0.0120% ( 1) 00:09:37.940 11746.068 - 11796.480: 0.0481% ( 3) 00:09:37.940 11796.480 - 11846.892: 0.0841% ( 3) 00:09:37.940 11846.892 - 11897.305: 0.1202% ( 3) 00:09:37.940 11897.305 - 11947.717: 0.1562% ( 3) 00:09:37.940 11947.717 - 11998.129: 0.1923% ( 3) 00:09:37.940 11998.129 - 12048.542: 0.2284% ( 3) 00:09:37.940 12048.542 - 12098.954: 0.2644% ( 3) 00:09:37.940 12098.954 - 12149.366: 0.3005% ( 3) 00:09:37.940 12149.366 - 12199.778: 0.3486% ( 4) 00:09:37.940 12199.778 - 12250.191: 0.4207% ( 6) 00:09:37.940 12250.191 - 12300.603: 0.5048% ( 7) 00:09:37.940 12300.603 - 12351.015: 0.5769% ( 6) 00:09:37.940 12351.015 - 12401.428: 0.6851% ( 9) 00:09:37.940 12401.428 - 12451.840: 0.8053% ( 10) 00:09:37.940 12451.840 - 12502.252: 0.9255% ( 10) 00:09:37.940 12502.252 - 12552.665: 1.0337% ( 9) 00:09:37.940 12552.665 - 12603.077: 1.1418% ( 9) 00:09:37.940 12603.077 - 12653.489: 1.2620% ( 10) 00:09:37.940 12653.489 - 12703.902: 1.4062% ( 12) 00:09:37.940 12703.902 - 12754.314: 1.5745% ( 14) 00:09:37.940 12754.314 - 12804.726: 1.8029% ( 19) 00:09:37.940 12804.726 - 12855.138: 2.1394% ( 28) 00:09:37.940 12855.138 - 12905.551: 2.3798% ( 20) 00:09:37.940 12905.551 - 13006.375: 2.9928% ( 51) 00:09:37.940 13006.375 - 13107.200: 3.6779% ( 57) 00:09:37.940 13107.200 - 13208.025: 4.7476% ( 89) 00:09:37.940 13208.025 - 13308.849: 5.9014% ( 96) 00:09:37.940 13308.849 - 13409.674: 7.2716% ( 114) 00:09:37.940 13409.674 - 13510.498: 8.9183% ( 137) 00:09:37.940 13510.498 - 13611.323: 10.7572% ( 153) 00:09:37.940 13611.323 - 13712.148: 12.7885% ( 169) 00:09:37.940 13712.148 - 13812.972: 15.1803% ( 199) 00:09:37.940 13812.972 - 13913.797: 17.8245% ( 220) 00:09:37.940 13913.797 - 14014.622: 20.3726% ( 212) 00:09:37.940 14014.622 - 14115.446: 23.1370% ( 230) 00:09:37.940 14115.446 - 14216.271: 26.0697% ( 244) 00:09:37.940 14216.271 - 14317.095: 29.1106% ( 253) 00:09:37.940 14317.095 - 14417.920: 32.1514% ( 253) 00:09:37.940 14417.920 - 14518.745: 
35.3365% ( 265) 00:09:37.940 14518.745 - 14619.569: 38.4736% ( 261) 00:09:37.940 14619.569 - 14720.394: 41.6707% ( 266) 00:09:37.940 14720.394 - 14821.218: 44.8077% ( 261) 00:09:37.940 14821.218 - 14922.043: 47.9207% ( 259) 00:09:37.940 14922.043 - 15022.868: 51.0457% ( 260) 00:09:37.940 15022.868 - 15123.692: 54.1226% ( 256) 00:09:37.940 15123.692 - 15224.517: 57.2596% ( 261) 00:09:37.940 15224.517 - 15325.342: 60.1683% ( 242) 00:09:37.940 15325.342 - 15426.166: 62.8966% ( 227) 00:09:37.940 15426.166 - 15526.991: 65.5168% ( 218) 00:09:37.940 15526.991 - 15627.815: 67.7524% ( 186) 00:09:37.940 15627.815 - 15728.640: 69.9519% ( 183) 00:09:37.940 15728.640 - 15829.465: 71.8510% ( 158) 00:09:37.940 15829.465 - 15930.289: 73.6418% ( 149) 00:09:37.940 15930.289 - 16031.114: 75.2524% ( 134) 00:09:37.940 16031.114 - 16131.938: 76.6947% ( 120) 00:09:37.940 16131.938 - 16232.763: 77.9808% ( 107) 00:09:37.940 16232.763 - 16333.588: 79.2668% ( 107) 00:09:37.940 16333.588 - 16434.412: 80.4087% ( 95) 00:09:37.940 16434.412 - 16535.237: 81.4784% ( 89) 00:09:37.940 16535.237 - 16636.062: 82.3918% ( 76) 00:09:37.940 16636.062 - 16736.886: 83.1490% ( 63) 00:09:37.940 16736.886 - 16837.711: 83.8702% ( 60) 00:09:37.940 16837.711 - 16938.535: 84.5673% ( 58) 00:09:37.940 16938.535 - 17039.360: 85.3125% ( 62) 00:09:37.940 17039.360 - 17140.185: 86.0938% ( 65) 00:09:37.940 17140.185 - 17241.009: 86.8510% ( 63) 00:09:37.940 17241.009 - 17341.834: 87.5721% ( 60) 00:09:37.940 17341.834 - 17442.658: 88.2812% ( 59) 00:09:37.940 17442.658 - 17543.483: 88.8582% ( 48) 00:09:37.940 17543.483 - 17644.308: 89.3510% ( 41) 00:09:37.940 17644.308 - 17745.132: 89.8678% ( 43) 00:09:37.940 17745.132 - 17845.957: 90.4327% ( 47) 00:09:37.940 17845.957 - 17946.782: 90.9976% ( 47) 00:09:37.940 17946.782 - 18047.606: 91.5024% ( 42) 00:09:37.940 18047.606 - 18148.431: 92.0312% ( 44) 00:09:37.940 18148.431 - 18249.255: 92.5240% ( 41) 00:09:37.940 18249.255 - 18350.080: 92.9808% ( 38) 00:09:37.940 18350.080 - 18450.905: 93.4495% ( 39) 00:09:37.940 18450.905 - 18551.729: 93.9183% ( 39) 00:09:37.940 18551.729 - 18652.554: 94.3750% ( 38) 00:09:37.940 18652.554 - 18753.378: 94.8077% ( 36) 00:09:37.940 18753.378 - 18854.203: 95.2284% ( 35) 00:09:37.940 18854.203 - 18955.028: 95.5288% ( 25) 00:09:37.940 18955.028 - 19055.852: 95.7812% ( 21) 00:09:37.940 19055.852 - 19156.677: 96.0938% ( 26) 00:09:37.940 19156.677 - 19257.502: 96.3822% ( 24) 00:09:37.940 19257.502 - 19358.326: 96.6707% ( 24) 00:09:37.940 19358.326 - 19459.151: 96.9591% ( 24) 00:09:37.940 19459.151 - 19559.975: 97.2236% ( 22) 00:09:37.940 19559.975 - 19660.800: 97.3798% ( 13) 00:09:37.940 19660.800 - 19761.625: 97.5120% ( 11) 00:09:37.940 19761.625 - 19862.449: 97.6562% ( 12) 00:09:37.940 19862.449 - 19963.274: 97.7644% ( 9) 00:09:37.940 19963.274 - 20064.098: 97.8966% ( 11) 00:09:37.940 20064.098 - 20164.923: 97.9928% ( 8) 00:09:37.940 20164.923 - 20265.748: 98.1130% ( 10) 00:09:37.940 20265.748 - 20366.572: 98.2332% ( 10) 00:09:37.940 20366.572 - 20467.397: 98.3534% ( 10) 00:09:37.940 20467.397 - 20568.222: 98.4495% ( 8) 00:09:37.940 20568.222 - 20669.046: 98.4615% ( 1) 00:09:37.940 34280.369 - 34482.018: 98.4736% ( 1) 00:09:37.940 34482.018 - 34683.668: 98.5697% ( 8) 00:09:37.940 34683.668 - 34885.317: 98.6418% ( 6) 00:09:37.940 34885.317 - 35086.966: 98.7380% ( 8) 00:09:37.940 35086.966 - 35288.615: 98.8101% ( 6) 00:09:37.940 35288.615 - 35490.265: 98.8942% ( 7) 00:09:37.940 35490.265 - 35691.914: 98.9784% ( 7) 00:09:37.940 35691.914 - 35893.563: 99.0625% ( 7) 00:09:37.940 
35893.563 - 36095.212: 99.1346% ( 6) 00:09:37.940 36095.212 - 36296.862: 99.2308% ( 8) 00:09:37.940 36296.862 - 36498.511: 99.3149% ( 7) 00:09:37.940 36498.511 - 36700.160: 99.3990% ( 7) 00:09:37.941 36700.160 - 36901.809: 99.4832% ( 7) 00:09:37.941 36901.809 - 37103.458: 99.5673% ( 7) 00:09:37.941 37103.458 - 37305.108: 99.6514% ( 7) 00:09:37.941 37305.108 - 37506.757: 99.7356% ( 7) 00:09:37.941 37506.757 - 37708.406: 99.8197% ( 7) 00:09:37.941 37708.406 - 37910.055: 99.9159% ( 8) 00:09:37.941 37910.055 - 38111.705: 100.0000% ( 7) 00:09:37.941 00:09:37.941 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:37.941 ============================================================================== 00:09:37.941 Range in us Cumulative IO count 00:09:37.941 11494.006 - 11544.418: 0.0361% ( 3) 00:09:37.941 11544.418 - 11594.831: 0.0481% ( 1) 00:09:37.941 11594.831 - 11645.243: 0.0962% ( 4) 00:09:37.941 11645.243 - 11695.655: 0.1082% ( 1) 00:09:37.941 11695.655 - 11746.068: 0.1442% ( 3) 00:09:37.941 11746.068 - 11796.480: 0.1803% ( 3) 00:09:37.941 11796.480 - 11846.892: 0.2163% ( 3) 00:09:37.941 11846.892 - 11897.305: 0.2404% ( 2) 00:09:37.941 11897.305 - 11947.717: 0.2764% ( 3) 00:09:37.941 11947.717 - 11998.129: 0.3005% ( 2) 00:09:37.941 11998.129 - 12048.542: 0.3846% ( 7) 00:09:37.941 12048.542 - 12098.954: 0.4207% ( 3) 00:09:37.941 12098.954 - 12149.366: 0.4928% ( 6) 00:09:37.941 12149.366 - 12199.778: 0.5409% ( 4) 00:09:37.941 12199.778 - 12250.191: 0.6370% ( 8) 00:09:37.941 12250.191 - 12300.603: 0.6731% ( 3) 00:09:37.941 12300.603 - 12351.015: 0.6971% ( 2) 00:09:37.941 12351.015 - 12401.428: 0.8173% ( 10) 00:09:37.941 12401.428 - 12451.840: 0.9135% ( 8) 00:09:37.941 12451.840 - 12502.252: 1.0938% ( 15) 00:09:37.941 12502.252 - 12552.665: 1.3702% ( 23) 00:09:37.941 12552.665 - 12603.077: 1.6226% ( 21) 00:09:37.941 12603.077 - 12653.489: 1.7909% ( 14) 00:09:37.941 12653.489 - 12703.902: 2.0192% ( 19) 00:09:37.941 12703.902 - 12754.314: 2.1755% ( 13) 00:09:37.941 12754.314 - 12804.726: 2.3438% ( 14) 00:09:37.941 12804.726 - 12855.138: 2.6082% ( 22) 00:09:37.941 12855.138 - 12905.551: 2.9567% ( 29) 00:09:37.941 12905.551 - 13006.375: 3.8822% ( 77) 00:09:37.941 13006.375 - 13107.200: 5.1442% ( 105) 00:09:37.941 13107.200 - 13208.025: 6.4784% ( 111) 00:09:37.941 13208.025 - 13308.849: 8.0168% ( 128) 00:09:37.941 13308.849 - 13409.674: 9.7837% ( 147) 00:09:37.941 13409.674 - 13510.498: 11.6346% ( 154) 00:09:37.941 13510.498 - 13611.323: 13.8582% ( 185) 00:09:37.941 13611.323 - 13712.148: 15.9135% ( 171) 00:09:37.941 13712.148 - 13812.972: 18.2692% ( 196) 00:09:37.941 13812.972 - 13913.797: 20.1442% ( 156) 00:09:37.941 13913.797 - 14014.622: 22.8966% ( 229) 00:09:37.941 14014.622 - 14115.446: 25.4928% ( 216) 00:09:37.941 14115.446 - 14216.271: 28.1130% ( 218) 00:09:37.941 14216.271 - 14317.095: 30.6731% ( 213) 00:09:37.941 14317.095 - 14417.920: 33.1370% ( 205) 00:09:37.941 14417.920 - 14518.745: 35.9976% ( 238) 00:09:37.941 14518.745 - 14619.569: 38.6659% ( 222) 00:09:37.941 14619.569 - 14720.394: 41.4904% ( 235) 00:09:37.941 14720.394 - 14821.218: 44.3750% ( 240) 00:09:37.941 14821.218 - 14922.043: 47.5962% ( 268) 00:09:37.941 14922.043 - 15022.868: 49.9880% ( 199) 00:09:37.941 15022.868 - 15123.692: 52.8245% ( 236) 00:09:37.941 15123.692 - 15224.517: 55.6611% ( 236) 00:09:37.941 15224.517 - 15325.342: 58.4255% ( 230) 00:09:37.941 15325.342 - 15426.166: 61.2740% ( 237) 00:09:37.941 15426.166 - 15526.991: 63.7139% ( 203) 00:09:37.941 15526.991 - 15627.815: 66.0577% ( 195) 
00:09:37.941 15627.815 - 15728.640: 68.4375% ( 198) 00:09:37.941 15728.640 - 15829.465: 70.5769% ( 178) 00:09:37.941 15829.465 - 15930.289: 72.4639% ( 157) 00:09:37.941 15930.289 - 16031.114: 74.0745% ( 134) 00:09:37.941 16031.114 - 16131.938: 75.7091% ( 136) 00:09:37.941 16131.938 - 16232.763: 77.1154% ( 117) 00:09:37.941 16232.763 - 16333.588: 78.6418% ( 127) 00:09:37.941 16333.588 - 16434.412: 79.8558% ( 101) 00:09:37.941 16434.412 - 16535.237: 81.2019% ( 112) 00:09:37.941 16535.237 - 16636.062: 82.2957% ( 91) 00:09:37.941 16636.062 - 16736.886: 83.5096% ( 101) 00:09:37.941 16736.886 - 16837.711: 84.3750% ( 72) 00:09:37.941 16837.711 - 16938.535: 85.3125% ( 78) 00:09:37.941 16938.535 - 17039.360: 86.1418% ( 69) 00:09:37.941 17039.360 - 17140.185: 86.9231% ( 65) 00:09:37.941 17140.185 - 17241.009: 87.7043% ( 65) 00:09:37.941 17241.009 - 17341.834: 88.3774% ( 56) 00:09:37.941 17341.834 - 17442.658: 88.8101% ( 36) 00:09:37.941 17442.658 - 17543.483: 89.3750% ( 47) 00:09:37.941 17543.483 - 17644.308: 89.9639% ( 49) 00:09:37.941 17644.308 - 17745.132: 90.4688% ( 42) 00:09:37.941 17745.132 - 17845.957: 90.9135% ( 37) 00:09:37.941 17845.957 - 17946.782: 91.4423% ( 44) 00:09:37.941 17946.782 - 18047.606: 91.9471% ( 42) 00:09:37.941 18047.606 - 18148.431: 92.3678% ( 35) 00:09:37.941 18148.431 - 18249.255: 92.8966% ( 44) 00:09:37.941 18249.255 - 18350.080: 93.2692% ( 31) 00:09:37.941 18350.080 - 18450.905: 93.5817% ( 26) 00:09:37.941 18450.905 - 18551.729: 93.8942% ( 26) 00:09:37.941 18551.729 - 18652.554: 94.2188% ( 27) 00:09:37.941 18652.554 - 18753.378: 94.5433% ( 27) 00:09:37.941 18753.378 - 18854.203: 94.8558% ( 26) 00:09:37.941 18854.203 - 18955.028: 95.1202% ( 22) 00:09:37.941 18955.028 - 19055.852: 95.3726% ( 21) 00:09:37.941 19055.852 - 19156.677: 95.5889% ( 18) 00:09:37.941 19156.677 - 19257.502: 95.7933% ( 17) 00:09:37.941 19257.502 - 19358.326: 95.9976% ( 17) 00:09:37.941 19358.326 - 19459.151: 96.2740% ( 23) 00:09:37.941 19459.151 - 19559.975: 96.4663% ( 16) 00:09:37.941 19559.975 - 19660.800: 96.6346% ( 14) 00:09:37.941 19660.800 - 19761.625: 96.8149% ( 15) 00:09:37.941 19761.625 - 19862.449: 96.9111% ( 8) 00:09:37.941 19862.449 - 19963.274: 97.0553% ( 12) 00:09:37.941 19963.274 - 20064.098: 97.1875% ( 11) 00:09:37.941 20064.098 - 20164.923: 97.2957% ( 9) 00:09:37.941 20164.923 - 20265.748: 97.4038% ( 9) 00:09:37.941 20265.748 - 20366.572: 97.4399% ( 3) 00:09:37.941 20366.572 - 20467.397: 97.6803% ( 20) 00:09:37.941 20467.397 - 20568.222: 97.7764% ( 8) 00:09:37.941 20568.222 - 20669.046: 97.8726% ( 8) 00:09:37.941 20669.046 - 20769.871: 97.9688% ( 8) 00:09:37.941 20769.871 - 20870.695: 98.0889% ( 10) 00:09:37.941 20870.695 - 20971.520: 98.1611% ( 6) 00:09:37.941 20971.520 - 21072.345: 98.2452% ( 7) 00:09:37.941 21072.345 - 21173.169: 98.3173% ( 6) 00:09:37.941 21173.169 - 21273.994: 98.3413% ( 2) 00:09:37.941 21273.994 - 21374.818: 98.4375% ( 8) 00:09:37.941 21374.818 - 21475.643: 98.4615% ( 2) 00:09:37.941 33877.071 - 34078.720: 98.4976% ( 3) 00:09:37.941 34078.720 - 34280.369: 98.5457% ( 4) 00:09:37.941 34280.369 - 34482.018: 98.6899% ( 12) 00:09:37.941 34482.018 - 34683.668: 98.7380% ( 4) 00:09:37.941 34683.668 - 34885.317: 98.8221% ( 7) 00:09:37.941 34885.317 - 35086.966: 98.8702% ( 4) 00:09:37.941 35086.966 - 35288.615: 98.9784% ( 9) 00:09:37.941 35288.615 - 35490.265: 99.0505% ( 6) 00:09:37.941 35490.265 - 35691.914: 99.1466% ( 8) 00:09:37.941 35691.914 - 35893.563: 99.2308% ( 7) 00:09:37.941 35893.563 - 36095.212: 99.3029% ( 6) 00:09:37.941 36095.212 - 36296.862: 99.3750% ( 
6) 00:09:37.941 36296.862 - 36498.511: 99.4591% ( 7) 00:09:37.941 36498.511 - 36700.160: 99.5312% ( 6) 00:09:37.941 36700.160 - 36901.809: 99.6154% ( 7) 00:09:37.941 36901.809 - 37103.458: 99.6995% ( 7) 00:09:37.941 37103.458 - 37305.108: 99.7837% ( 7) 00:09:37.941 37305.108 - 37506.757: 99.8798% ( 8) 00:09:37.941 37506.757 - 37708.406: 99.9399% ( 5) 00:09:37.941 37708.406 - 37910.055: 100.0000% ( 5) 00:09:37.941 00:09:37.941 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:37.941 ============================================================================== 00:09:37.941 Range in us Cumulative IO count 00:09:37.941 11947.717 - 11998.129: 0.0240% ( 2) 00:09:37.941 11998.129 - 12048.542: 0.0601% ( 3) 00:09:37.941 12048.542 - 12098.954: 0.0962% ( 3) 00:09:37.941 12098.954 - 12149.366: 0.1322% ( 3) 00:09:37.941 12149.366 - 12199.778: 0.1803% ( 4) 00:09:37.941 12199.778 - 12250.191: 0.2885% ( 9) 00:09:37.941 12250.191 - 12300.603: 0.4087% ( 10) 00:09:37.941 12300.603 - 12351.015: 0.5168% ( 9) 00:09:37.941 12351.015 - 12401.428: 0.6490% ( 11) 00:09:37.941 12401.428 - 12451.840: 0.8293% ( 15) 00:09:37.941 12451.840 - 12502.252: 1.0096% ( 15) 00:09:37.941 12502.252 - 12552.665: 1.2260% ( 18) 00:09:37.941 12552.665 - 12603.077: 1.4663% ( 20) 00:09:37.941 12603.077 - 12653.489: 1.6587% ( 16) 00:09:37.941 12653.489 - 12703.902: 1.8630% ( 17) 00:09:37.941 12703.902 - 12754.314: 2.0913% ( 19) 00:09:37.941 12754.314 - 12804.726: 2.3317% ( 20) 00:09:37.941 12804.726 - 12855.138: 2.5962% ( 22) 00:09:37.941 12855.138 - 12905.551: 2.8726% ( 23) 00:09:37.941 12905.551 - 13006.375: 3.3894% ( 43) 00:09:37.941 13006.375 - 13107.200: 4.2188% ( 69) 00:09:37.941 13107.200 - 13208.025: 5.4447% ( 102) 00:09:37.941 13208.025 - 13308.849: 7.0072% ( 130) 00:09:37.941 13308.849 - 13409.674: 8.7380% ( 144) 00:09:37.941 13409.674 - 13510.498: 10.6611% ( 160) 00:09:37.941 13510.498 - 13611.323: 12.6562% ( 166) 00:09:37.941 13611.323 - 13712.148: 14.9159% ( 188) 00:09:37.941 13712.148 - 13812.972: 17.5120% ( 216) 00:09:37.941 13812.972 - 13913.797: 20.2404% ( 227) 00:09:37.941 13913.797 - 14014.622: 22.9327% ( 224) 00:09:37.941 14014.622 - 14115.446: 25.7692% ( 236) 00:09:37.941 14115.446 - 14216.271: 28.6779% ( 242) 00:09:37.941 14216.271 - 14317.095: 31.8389% ( 263) 00:09:37.941 14317.095 - 14417.920: 34.9880% ( 262) 00:09:37.941 14417.920 - 14518.745: 38.1731% ( 265) 00:09:37.941 14518.745 - 14619.569: 41.4303% ( 271) 00:09:37.941 14619.569 - 14720.394: 44.5913% ( 263) 00:09:37.941 14720.394 - 14821.218: 47.7764% ( 265) 00:09:37.941 14821.218 - 14922.043: 50.8774% ( 258) 00:09:37.941 14922.043 - 15022.868: 53.9062% ( 252) 00:09:37.941 15022.868 - 15123.692: 57.0433% ( 261) 00:09:37.941 15123.692 - 15224.517: 60.0721% ( 252) 00:09:37.941 15224.517 - 15325.342: 62.9447% ( 239) 00:09:37.941 15325.342 - 15426.166: 65.5889% ( 220) 00:09:37.941 15426.166 - 15526.991: 67.8966% ( 192) 00:09:37.941 15526.991 - 15627.815: 70.0481% ( 179) 00:09:37.941 15627.815 - 15728.640: 71.9591% ( 159) 00:09:37.941 15728.640 - 15829.465: 73.6058% ( 137) 00:09:37.942 15829.465 - 15930.289: 75.0481% ( 120) 00:09:37.942 15930.289 - 16031.114: 76.4062% ( 113) 00:09:37.942 16031.114 - 16131.938: 77.6202% ( 101) 00:09:37.942 16131.938 - 16232.763: 78.7861% ( 97) 00:09:37.942 16232.763 - 16333.588: 79.7837% ( 83) 00:09:37.942 16333.588 - 16434.412: 80.7572% ( 81) 00:09:37.942 16434.412 - 16535.237: 81.5745% ( 68) 00:09:37.942 16535.237 - 16636.062: 82.3317% ( 63) 00:09:37.942 16636.062 - 16736.886: 83.0288% ( 58) 00:09:37.942 
16736.886 - 16837.711: 83.7380% ( 59) 00:09:37.942 16837.711 - 16938.535: 84.4712% ( 61) 00:09:37.942 16938.535 - 17039.360: 85.1683% ( 58) 00:09:37.942 17039.360 - 17140.185: 85.8053% ( 53) 00:09:37.942 17140.185 - 17241.009: 86.4183% ( 51) 00:09:37.942 17241.009 - 17341.834: 86.9471% ( 44) 00:09:37.942 17341.834 - 17442.658: 87.4760% ( 44) 00:09:37.942 17442.658 - 17543.483: 88.0409% ( 47) 00:09:37.942 17543.483 - 17644.308: 88.5457% ( 42) 00:09:37.942 17644.308 - 17745.132: 89.0745% ( 44) 00:09:37.942 17745.132 - 17845.957: 89.6755% ( 50) 00:09:37.942 17845.957 - 17946.782: 90.3005% ( 52) 00:09:37.942 17946.782 - 18047.606: 90.8774% ( 48) 00:09:37.942 18047.606 - 18148.431: 91.4303% ( 46) 00:09:37.942 18148.431 - 18249.255: 91.9231% ( 41) 00:09:37.942 18249.255 - 18350.080: 92.3558% ( 36) 00:09:37.942 18350.080 - 18450.905: 92.7644% ( 34) 00:09:37.942 18450.905 - 18551.729: 93.0529% ( 24) 00:09:37.942 18551.729 - 18652.554: 93.3774% ( 27) 00:09:37.942 18652.554 - 18753.378: 93.7139% ( 28) 00:09:37.942 18753.378 - 18854.203: 94.0024% ( 24) 00:09:37.942 18854.203 - 18955.028: 94.2788% ( 23) 00:09:37.942 18955.028 - 19055.852: 94.5673% ( 24) 00:09:37.942 19055.852 - 19156.677: 94.8317% ( 22) 00:09:37.942 19156.677 - 19257.502: 95.1202% ( 24) 00:09:37.942 19257.502 - 19358.326: 95.3365% ( 18) 00:09:37.942 19358.326 - 19459.151: 95.5409% ( 17) 00:09:37.942 19459.151 - 19559.975: 95.7091% ( 14) 00:09:37.942 19559.975 - 19660.800: 95.8413% ( 11) 00:09:37.942 19660.800 - 19761.625: 95.9736% ( 11) 00:09:37.942 19761.625 - 19862.449: 96.1058% ( 11) 00:09:37.942 19862.449 - 19963.274: 96.2380% ( 11) 00:09:37.942 19963.274 - 20064.098: 96.3822% ( 12) 00:09:37.942 20064.098 - 20164.923: 96.5024% ( 10) 00:09:37.942 20164.923 - 20265.748: 96.6106% ( 9) 00:09:37.942 20265.748 - 20366.572: 96.7308% ( 10) 00:09:37.942 20366.572 - 20467.397: 96.8510% ( 10) 00:09:37.942 20467.397 - 20568.222: 96.9591% ( 9) 00:09:37.942 20568.222 - 20669.046: 97.0553% ( 8) 00:09:37.942 20669.046 - 20769.871: 97.1755% ( 10) 00:09:37.942 20769.871 - 20870.695: 97.2716% ( 8) 00:09:37.942 20870.695 - 20971.520: 97.3558% ( 7) 00:09:37.942 20971.520 - 21072.345: 97.4279% ( 6) 00:09:37.942 21072.345 - 21173.169: 97.5000% ( 6) 00:09:37.942 21173.169 - 21273.994: 97.5841% ( 7) 00:09:37.942 21273.994 - 21374.818: 97.6562% ( 6) 00:09:37.942 21374.818 - 21475.643: 97.7163% ( 5) 00:09:37.942 21475.643 - 21576.468: 97.7885% ( 6) 00:09:37.942 21576.468 - 21677.292: 97.8486% ( 5) 00:09:37.942 21677.292 - 21778.117: 97.9207% ( 6) 00:09:37.942 21778.117 - 21878.942: 97.9928% ( 6) 00:09:37.942 21878.942 - 21979.766: 98.0529% ( 5) 00:09:37.942 21979.766 - 22080.591: 98.1130% ( 5) 00:09:37.942 22080.591 - 22181.415: 98.1851% ( 6) 00:09:37.942 22181.415 - 22282.240: 98.2572% ( 6) 00:09:37.942 22282.240 - 22383.065: 98.3293% ( 6) 00:09:37.942 22383.065 - 22483.889: 98.4135% ( 7) 00:09:37.942 22483.889 - 22584.714: 98.4495% ( 3) 00:09:37.942 22584.714 - 22685.538: 98.4615% ( 1) 00:09:37.942 33473.772 - 33675.422: 98.5096% ( 4) 00:09:37.942 33675.422 - 33877.071: 98.5938% ( 7) 00:09:37.942 33877.071 - 34078.720: 98.6779% ( 7) 00:09:37.942 34078.720 - 34280.369: 98.7740% ( 8) 00:09:37.942 34280.369 - 34482.018: 98.8462% ( 6) 00:09:37.942 34482.018 - 34683.668: 98.9303% ( 7) 00:09:37.942 34683.668 - 34885.317: 99.0144% ( 7) 00:09:37.942 34885.317 - 35086.966: 99.0986% ( 7) 00:09:37.942 35086.966 - 35288.615: 99.1827% ( 7) 00:09:37.942 35288.615 - 35490.265: 99.2668% ( 7) 00:09:37.942 35490.265 - 35691.914: 99.3510% ( 7) 00:09:37.942 35691.914 - 
35893.563: 99.4231% ( 6) 00:09:37.942 35893.563 - 36095.212: 99.5192% ( 8) 00:09:37.942 36095.212 - 36296.862: 99.6034% ( 7) 00:09:37.942 36296.862 - 36498.511: 99.6875% ( 7) 00:09:37.942 36498.511 - 36700.160: 99.7596% ( 6) 00:09:37.942 36700.160 - 36901.809: 99.8438% ( 7) 00:09:37.942 36901.809 - 37103.458: 99.9279% ( 7) 00:09:37.942 37103.458 - 37305.108: 100.0000% ( 6) 00:09:37.942 00:09:37.942 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:37.942 ============================================================================== 00:09:37.942 Range in us Cumulative IO count 00:09:37.942 9981.637 - 10032.049: 0.0240% ( 2) 00:09:37.942 10032.049 - 10082.462: 0.0841% ( 5) 00:09:37.942 10082.462 - 10132.874: 0.0962% ( 1) 00:09:37.942 10132.874 - 10183.286: 0.1202% ( 2) 00:09:37.942 10183.286 - 10233.698: 0.1442% ( 2) 00:09:37.942 10233.698 - 10284.111: 0.1803% ( 3) 00:09:37.942 10284.111 - 10334.523: 0.2043% ( 2) 00:09:37.942 10334.523 - 10384.935: 0.2524% ( 4) 00:09:37.942 10384.935 - 10435.348: 0.2764% ( 2) 00:09:37.942 10435.348 - 10485.760: 0.3125% ( 3) 00:09:37.942 10485.760 - 10536.172: 0.3365% ( 2) 00:09:37.942 10536.172 - 10586.585: 0.3606% ( 2) 00:09:37.942 10586.585 - 10636.997: 0.3846% ( 2) 00:09:37.942 10636.997 - 10687.409: 0.4207% ( 3) 00:09:37.942 10687.409 - 10737.822: 0.4447% ( 2) 00:09:37.942 10737.822 - 10788.234: 0.4688% ( 2) 00:09:37.942 10788.234 - 10838.646: 0.4928% ( 2) 00:09:37.942 10838.646 - 10889.058: 0.5288% ( 3) 00:09:37.942 10889.058 - 10939.471: 0.5529% ( 2) 00:09:37.942 10939.471 - 10989.883: 0.5889% ( 3) 00:09:37.942 10989.883 - 11040.295: 0.6130% ( 2) 00:09:37.942 11040.295 - 11090.708: 0.6490% ( 3) 00:09:37.942 11090.708 - 11141.120: 0.6731% ( 2) 00:09:37.942 11141.120 - 11191.532: 0.6851% ( 1) 00:09:37.942 11191.532 - 11241.945: 0.7212% ( 3) 00:09:37.942 11241.945 - 11292.357: 0.7452% ( 2) 00:09:37.942 11292.357 - 11342.769: 0.7812% ( 3) 00:09:37.942 11342.769 - 11393.182: 0.8053% ( 2) 00:09:37.942 11393.182 - 11443.594: 0.8293% ( 2) 00:09:37.942 11443.594 - 11494.006: 0.8654% ( 3) 00:09:37.942 11494.006 - 11544.418: 0.8894% ( 2) 00:09:37.942 11544.418 - 11594.831: 0.9255% ( 3) 00:09:37.942 11594.831 - 11645.243: 0.9495% ( 2) 00:09:37.942 11645.243 - 11695.655: 0.9856% ( 3) 00:09:37.942 11695.655 - 11746.068: 1.0096% ( 2) 00:09:37.942 11746.068 - 11796.480: 1.0337% ( 2) 00:09:37.942 11796.480 - 11846.892: 1.0697% ( 3) 00:09:37.942 11846.892 - 11897.305: 1.0938% ( 2) 00:09:37.942 11897.305 - 11947.717: 1.1298% ( 3) 00:09:37.942 11947.717 - 11998.129: 1.1538% ( 2) 00:09:37.942 11998.129 - 12048.542: 1.2139% ( 5) 00:09:37.942 12048.542 - 12098.954: 1.2620% ( 4) 00:09:37.942 12098.954 - 12149.366: 1.3341% ( 6) 00:09:37.942 12149.366 - 12199.778: 1.3942% ( 5) 00:09:37.942 12199.778 - 12250.191: 1.4784% ( 7) 00:09:37.942 12250.191 - 12300.603: 1.5745% ( 8) 00:09:37.942 12300.603 - 12351.015: 1.6707% ( 8) 00:09:37.942 12351.015 - 12401.428: 1.7909% ( 10) 00:09:37.942 12401.428 - 12451.840: 1.9712% ( 15) 00:09:37.942 12451.840 - 12502.252: 2.0913% ( 10) 00:09:37.942 12502.252 - 12552.665: 2.2476% ( 13) 00:09:37.942 12552.665 - 12603.077: 2.4038% ( 13) 00:09:37.942 12603.077 - 12653.489: 2.5601% ( 13) 00:09:37.942 12653.489 - 12703.902: 2.7764% ( 18) 00:09:37.942 12703.902 - 12754.314: 3.0529% ( 23) 00:09:37.942 12754.314 - 12804.726: 3.3894% ( 28) 00:09:37.942 12804.726 - 12855.138: 3.7740% ( 32) 00:09:37.942 12855.138 - 12905.551: 4.1106% ( 28) 00:09:37.942 12905.551 - 13006.375: 5.0601% ( 79) 00:09:37.942 13006.375 - 13107.200: 5.9856% ( 77) 
00:09:37.942 13107.200 - 13208.025: 7.2476% ( 105) 00:09:37.942 13208.025 - 13308.849: 8.7500% ( 125) 00:09:37.942 13308.849 - 13409.674: 10.5769% ( 152) 00:09:37.942 13409.674 - 13510.498: 12.3918% ( 151) 00:09:37.942 13510.498 - 13611.323: 14.6514% ( 188) 00:09:37.942 13611.323 - 13712.148: 16.8029% ( 179) 00:09:37.942 13712.148 - 13812.972: 19.1707% ( 197) 00:09:37.942 13812.972 - 13913.797: 21.7308% ( 213) 00:09:37.942 13913.797 - 14014.622: 24.5312% ( 233) 00:09:37.942 14014.622 - 14115.446: 27.4880% ( 246) 00:09:37.942 14115.446 - 14216.271: 30.7933% ( 275) 00:09:37.942 14216.271 - 14317.095: 34.0745% ( 273) 00:09:37.942 14317.095 - 14417.920: 37.1514% ( 256) 00:09:37.942 14417.920 - 14518.745: 40.2524% ( 258) 00:09:37.942 14518.745 - 14619.569: 43.4976% ( 270) 00:09:37.942 14619.569 - 14720.394: 46.6827% ( 265) 00:09:37.942 14720.394 - 14821.218: 49.7115% ( 252) 00:09:37.942 14821.218 - 14922.043: 52.8005% ( 257) 00:09:37.942 14922.043 - 15022.868: 55.8774% ( 256) 00:09:37.942 15022.868 - 15123.692: 58.5697% ( 224) 00:09:37.942 15123.692 - 15224.517: 61.3462% ( 231) 00:09:37.942 15224.517 - 15325.342: 63.9062% ( 213) 00:09:37.942 15325.342 - 15426.166: 66.4062% ( 208) 00:09:37.942 15426.166 - 15526.991: 68.7019% ( 191) 00:09:37.942 15526.991 - 15627.815: 70.7091% ( 167) 00:09:37.942 15627.815 - 15728.640: 72.5000% ( 149) 00:09:37.942 15728.640 - 15829.465: 74.1947% ( 141) 00:09:37.942 15829.465 - 15930.289: 75.6731% ( 123) 00:09:37.942 15930.289 - 16031.114: 77.0312% ( 113) 00:09:37.942 16031.114 - 16131.938: 78.2933% ( 105) 00:09:37.942 16131.938 - 16232.763: 79.3870% ( 91) 00:09:37.942 16232.763 - 16333.588: 80.3245% ( 78) 00:09:37.942 16333.588 - 16434.412: 81.0697% ( 62) 00:09:37.942 16434.412 - 16535.237: 81.7909% ( 60) 00:09:37.942 16535.237 - 16636.062: 82.4279% ( 53) 00:09:37.942 16636.062 - 16736.886: 83.0769% ( 54) 00:09:37.942 16736.886 - 16837.711: 83.6659% ( 49) 00:09:37.942 16837.711 - 16938.535: 84.2308% ( 47) 00:09:37.942 16938.535 - 17039.360: 84.7837% ( 46) 00:09:37.942 17039.360 - 17140.185: 85.2524% ( 39) 00:09:37.942 17140.185 - 17241.009: 85.6971% ( 37) 00:09:37.942 17241.009 - 17341.834: 86.1298% ( 36) 00:09:37.942 17341.834 - 17442.658: 86.5865% ( 38) 00:09:37.942 17442.658 - 17543.483: 87.0913% ( 42) 00:09:37.942 17543.483 - 17644.308: 87.5721% ( 40) 00:09:37.942 17644.308 - 17745.132: 88.0409% ( 39) 00:09:37.942 17745.132 - 17845.957: 88.4014% ( 30) 00:09:37.942 17845.957 - 17946.782: 88.7740% ( 31) 00:09:37.942 17946.782 - 18047.606: 89.1346% ( 30) 00:09:37.942 18047.606 - 18148.431: 89.5192% ( 32) 00:09:37.943 18148.431 - 18249.255: 89.9038% ( 32) 00:09:37.943 18249.255 - 18350.080: 90.2764% ( 31) 00:09:37.943 18350.080 - 18450.905: 90.6851% ( 34) 00:09:37.943 18450.905 - 18551.729: 91.0697% ( 32) 00:09:37.943 18551.729 - 18652.554: 91.6106% ( 45) 00:09:37.943 18652.554 - 18753.378: 92.0433% ( 36) 00:09:37.943 18753.378 - 18854.203: 92.4519% ( 34) 00:09:37.943 18854.203 - 18955.028: 92.8125% ( 30) 00:09:37.943 18955.028 - 19055.852: 93.1250% ( 26) 00:09:37.943 19055.852 - 19156.677: 93.4495% ( 27) 00:09:37.943 19156.677 - 19257.502: 93.7620% ( 26) 00:09:37.943 19257.502 - 19358.326: 94.0745% ( 26) 00:09:37.943 19358.326 - 19459.151: 94.3750% ( 25) 00:09:37.943 19459.151 - 19559.975: 94.6875% ( 26) 00:09:37.943 19559.975 - 19660.800: 95.0240% ( 28) 00:09:37.943 19660.800 - 19761.625: 95.3245% ( 25) 00:09:37.943 19761.625 - 19862.449: 95.6370% ( 26) 00:09:37.943 19862.449 - 19963.274: 95.9255% ( 24) 00:09:37.943 19963.274 - 20064.098: 96.1538% ( 19) 
00:09:37.943 20064.098 - 20164.923: 96.3462% ( 16) 00:09:37.943 20164.923 - 20265.748: 96.5625% ( 18) 00:09:37.943 20265.748 - 20366.572: 96.7788% ( 18) 00:09:37.943 20366.572 - 20467.397: 96.9952% ( 18) 00:09:37.943 20467.397 - 20568.222: 97.1995% ( 17) 00:09:37.943 20568.222 - 20669.046: 97.3918% ( 16) 00:09:37.943 20669.046 - 20769.871: 97.5361% ( 12) 00:09:37.943 20769.871 - 20870.695: 97.6082% ( 6) 00:09:37.943 20870.695 - 20971.520: 97.6803% ( 6) 00:09:37.943 20971.520 - 21072.345: 97.7524% ( 6) 00:09:37.943 21072.345 - 21173.169: 97.8245% ( 6) 00:09:37.943 21173.169 - 21273.994: 97.8966% ( 6) 00:09:37.943 21273.994 - 21374.818: 97.9688% ( 6) 00:09:37.943 21374.818 - 21475.643: 98.0409% ( 6) 00:09:37.943 21475.643 - 21576.468: 98.1010% ( 5) 00:09:37.943 21576.468 - 21677.292: 98.1731% ( 6) 00:09:37.943 21677.292 - 21778.117: 98.2452% ( 6) 00:09:37.943 21778.117 - 21878.942: 98.3173% ( 6) 00:09:37.943 21878.942 - 21979.766: 98.3774% ( 5) 00:09:37.943 21979.766 - 22080.591: 98.4135% ( 3) 00:09:37.943 22080.591 - 22181.415: 98.4495% ( 3) 00:09:37.943 22181.415 - 22282.240: 98.4615% ( 1) 00:09:37.943 34683.668 - 34885.317: 98.4736% ( 1) 00:09:37.943 34885.317 - 35086.966: 98.5216% ( 4) 00:09:37.943 35086.966 - 35288.615: 98.5697% ( 4) 00:09:37.943 35288.615 - 35490.265: 98.6178% ( 4) 00:09:37.943 35490.265 - 35691.914: 98.6899% ( 6) 00:09:37.943 35691.914 - 35893.563: 98.7740% ( 7) 00:09:37.943 35893.563 - 36095.212: 98.8582% ( 7) 00:09:37.943 36095.212 - 36296.862: 98.9423% ( 7) 00:09:37.943 36296.862 - 36498.511: 99.0264% ( 7) 00:09:37.943 36498.511 - 36700.160: 99.1226% ( 8) 00:09:37.943 36700.160 - 36901.809: 99.1947% ( 6) 00:09:37.943 36901.809 - 37103.458: 99.2788% ( 7) 00:09:37.943 37103.458 - 37305.108: 99.3510% ( 6) 00:09:37.943 37305.108 - 37506.757: 99.4471% ( 8) 00:09:37.943 37506.757 - 37708.406: 99.5312% ( 7) 00:09:37.943 37708.406 - 37910.055: 99.6154% ( 7) 00:09:37.943 37910.055 - 38111.705: 99.6995% ( 7) 00:09:37.943 38111.705 - 38313.354: 99.7837% ( 7) 00:09:37.943 38313.354 - 38515.003: 99.8678% ( 7) 00:09:37.943 38515.003 - 38716.652: 99.9639% ( 8) 00:09:37.943 38716.652 - 38918.302: 100.0000% ( 3) 00:09:37.943 00:09:37.943 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:37.943 ============================================================================== 00:09:37.943 Range in us Cumulative IO count 00:09:37.943 9326.277 - 9376.689: 0.0118% ( 1) 00:09:37.943 9376.689 - 9427.102: 0.0473% ( 3) 00:09:37.943 9427.102 - 9477.514: 0.0829% ( 3) 00:09:37.943 9477.514 - 9527.926: 0.1420% ( 5) 00:09:37.943 9527.926 - 9578.338: 0.1657% ( 2) 00:09:37.943 9578.338 - 9628.751: 0.1894% ( 2) 00:09:37.943 9628.751 - 9679.163: 0.2131% ( 2) 00:09:37.943 9679.163 - 9729.575: 0.2249% ( 1) 00:09:37.943 9729.575 - 9779.988: 0.2486% ( 2) 00:09:37.943 9779.988 - 9830.400: 0.2723% ( 2) 00:09:37.943 9830.400 - 9880.812: 0.2959% ( 2) 00:09:37.943 9880.812 - 9931.225: 0.3314% ( 3) 00:09:37.943 9931.225 - 9981.637: 0.3551% ( 2) 00:09:37.943 9981.637 - 10032.049: 0.3906% ( 3) 00:09:37.943 10032.049 - 10082.462: 0.4261% ( 3) 00:09:37.943 10082.462 - 10132.874: 0.4498% ( 2) 00:09:37.943 10132.874 - 10183.286: 0.4735% ( 2) 00:09:37.943 10183.286 - 10233.698: 0.5090% ( 3) 00:09:37.943 10233.698 - 10284.111: 0.5327% ( 2) 00:09:37.943 10284.111 - 10334.523: 0.5563% ( 2) 00:09:37.943 10334.523 - 10384.935: 0.5919% ( 3) 00:09:37.943 10384.935 - 10435.348: 0.6155% ( 2) 00:09:37.943 10435.348 - 10485.760: 0.6510% ( 3) 00:09:37.943 10485.760 - 10536.172: 0.6747% ( 2) 00:09:37.943 10536.172 
- 10586.585: 0.6984% ( 2) 00:09:37.943 10586.585 - 10636.997: 0.7221% ( 2) 00:09:37.943 10636.997 - 10687.409: 0.7576% ( 3) 00:09:37.943 10687.409 - 10737.822: 0.7812% ( 2) 00:09:37.943 10737.822 - 10788.234: 0.8049% ( 2) 00:09:37.943 10788.234 - 10838.646: 0.8286% ( 2) 00:09:37.943 10838.646 - 10889.058: 0.8641% ( 3) 00:09:37.943 10889.058 - 10939.471: 0.8878% ( 2) 00:09:37.943 10939.471 - 10989.883: 0.9233% ( 3) 00:09:37.943 10989.883 - 11040.295: 0.9470% ( 2) 00:09:37.943 11040.295 - 11090.708: 0.9825% ( 3) 00:09:37.943 11090.708 - 11141.120: 1.0062% ( 2) 00:09:37.943 11141.120 - 11191.532: 1.0298% ( 2) 00:09:37.943 11191.532 - 11241.945: 1.0417% ( 1) 00:09:37.943 11241.945 - 11292.357: 1.0772% ( 3) 00:09:37.943 11292.357 - 11342.769: 1.1009% ( 2) 00:09:37.943 11342.769 - 11393.182: 1.1245% ( 2) 00:09:37.943 11393.182 - 11443.594: 1.1482% ( 2) 00:09:37.943 11443.594 - 11494.006: 1.1719% ( 2) 00:09:37.943 11494.006 - 11544.418: 1.1955% ( 2) 00:09:37.943 11544.418 - 11594.831: 1.2311% ( 3) 00:09:37.943 11594.831 - 11645.243: 1.2547% ( 2) 00:09:37.943 11645.243 - 11695.655: 1.2784% ( 2) 00:09:37.943 11695.655 - 11746.068: 1.3021% ( 2) 00:09:37.943 11746.068 - 11796.480: 1.3376% ( 3) 00:09:37.943 11796.480 - 11846.892: 1.3613% ( 2) 00:09:37.943 11846.892 - 11897.305: 1.3849% ( 2) 00:09:37.943 11897.305 - 11947.717: 1.4086% ( 2) 00:09:37.943 11947.717 - 11998.129: 1.4323% ( 2) 00:09:37.943 11998.129 - 12048.542: 1.4560% ( 2) 00:09:37.943 12048.542 - 12098.954: 1.4915% ( 3) 00:09:37.943 12098.954 - 12149.366: 1.5152% ( 2) 00:09:37.943 12149.366 - 12199.778: 1.5388% ( 2) 00:09:37.943 12199.778 - 12250.191: 1.5743% ( 3) 00:09:37.943 12250.191 - 12300.603: 1.6217% ( 4) 00:09:37.943 12300.603 - 12351.015: 1.6809% ( 5) 00:09:37.943 12351.015 - 12401.428: 1.7637% ( 7) 00:09:37.943 12401.428 - 12451.840: 2.0123% ( 21) 00:09:37.943 12451.840 - 12502.252: 2.2017% ( 16) 00:09:37.943 12502.252 - 12552.665: 2.3556% ( 13) 00:09:37.943 12552.665 - 12603.077: 2.5568% ( 17) 00:09:37.943 12603.077 - 12653.489: 2.7699% ( 18) 00:09:37.943 12653.489 - 12703.902: 2.9356% ( 14) 00:09:37.943 12703.902 - 12754.314: 3.1368% ( 17) 00:09:37.943 12754.314 - 12804.726: 3.4446% ( 26) 00:09:37.943 12804.726 - 12855.138: 3.8707% ( 36) 00:09:37.943 12855.138 - 12905.551: 4.2850% ( 35) 00:09:37.943 12905.551 - 13006.375: 5.3149% ( 87) 00:09:37.943 13006.375 - 13107.200: 6.3802% ( 90) 00:09:37.943 13107.200 - 13208.025: 7.5284% ( 97) 00:09:37.943 13208.025 - 13308.849: 9.0080% ( 125) 00:09:37.943 13308.849 - 13409.674: 10.8310% ( 154) 00:09:37.943 13409.674 - 13510.498: 12.8314% ( 169) 00:09:37.943 13510.498 - 13611.323: 14.9740% ( 181) 00:09:37.943 13611.323 - 13712.148: 17.3414% ( 200) 00:09:37.943 13712.148 - 13812.972: 20.1349% ( 236) 00:09:37.943 13812.972 - 13913.797: 22.9640% ( 239) 00:09:37.943 13913.797 - 14014.622: 25.7812% ( 238) 00:09:37.943 14014.622 - 14115.446: 28.6103% ( 239) 00:09:37.943 14115.446 - 14216.271: 31.6288% ( 255) 00:09:37.943 14216.271 - 14317.095: 34.7538% ( 264) 00:09:37.943 14317.095 - 14417.920: 37.7249% ( 251) 00:09:37.943 14417.920 - 14518.745: 40.7315% ( 254) 00:09:37.943 14518.745 - 14619.569: 43.7500% ( 255) 00:09:37.943 14619.569 - 14720.394: 46.5672% ( 238) 00:09:37.943 14720.394 - 14821.218: 49.4555% ( 244) 00:09:37.943 14821.218 - 14922.043: 52.2491% ( 236) 00:09:37.943 14922.043 - 15022.868: 55.0545% ( 237) 00:09:37.943 15022.868 - 15123.692: 57.7060% ( 224) 00:09:37.943 15123.692 - 15224.517: 60.2628% ( 216) 00:09:37.943 15224.517 - 15325.342: 62.5947% ( 197) 00:09:37.943 15325.342 
- 15426.166: 64.9740% ( 201) 00:09:37.944 15426.166 - 15526.991: 67.1046% ( 180) 00:09:37.944 15526.991 - 15627.815: 69.0578% ( 165) 00:09:37.944 15627.815 - 15728.640: 70.7741% ( 145) 00:09:37.944 15728.640 - 15829.465: 72.2301% ( 123) 00:09:37.944 15829.465 - 15930.289: 73.4848% ( 106) 00:09:37.944 15930.289 - 16031.114: 74.6212% ( 96) 00:09:37.944 16031.114 - 16131.938: 75.6747% ( 89) 00:09:37.944 16131.938 - 16232.763: 76.8703% ( 101) 00:09:37.944 16232.763 - 16333.588: 78.0185% ( 97) 00:09:37.944 16333.588 - 16434.412: 79.1548% ( 96) 00:09:37.944 16434.412 - 16535.237: 80.2557% ( 93) 00:09:37.944 16535.237 - 16636.062: 81.2618% ( 85) 00:09:37.944 16636.062 - 16736.886: 82.2088% ( 80) 00:09:37.944 16736.886 - 16837.711: 83.0729% ( 73) 00:09:37.944 16837.711 - 16938.535: 83.9134% ( 71) 00:09:37.944 16938.535 - 17039.360: 84.7301% ( 69) 00:09:37.944 17039.360 - 17140.185: 85.5232% ( 67) 00:09:37.944 17140.185 - 17241.009: 86.3045% ( 66) 00:09:37.944 17241.009 - 17341.834: 87.0028% ( 59) 00:09:37.944 17341.834 - 17442.658: 87.6539% ( 55) 00:09:37.944 17442.658 - 17543.483: 88.2339% ( 49) 00:09:37.944 17543.483 - 17644.308: 88.8258% ( 50) 00:09:37.944 17644.308 - 17745.132: 89.3229% ( 42) 00:09:37.944 17745.132 - 17845.957: 89.7846% ( 39) 00:09:37.944 17845.957 - 17946.782: 90.2225% ( 37) 00:09:37.944 17946.782 - 18047.606: 90.5658% ( 29) 00:09:37.944 18047.606 - 18148.431: 90.9328% ( 31) 00:09:37.944 18148.431 - 18249.255: 91.3471% ( 35) 00:09:37.944 18249.255 - 18350.080: 91.7140% ( 31) 00:09:37.944 18350.080 - 18450.905: 92.0691% ( 30) 00:09:37.944 18450.905 - 18551.729: 92.4716% ( 34) 00:09:37.944 18551.729 - 18652.554: 92.8504% ( 32) 00:09:37.944 18652.554 - 18753.378: 93.1345% ( 24) 00:09:37.944 18753.378 - 18854.203: 93.4186% ( 24) 00:09:37.944 18854.203 - 18955.028: 93.6908% ( 23) 00:09:37.944 18955.028 - 19055.852: 93.9749% ( 24) 00:09:37.944 19055.852 - 19156.677: 94.2353% ( 22) 00:09:37.944 19156.677 - 19257.502: 94.4721% ( 20) 00:09:37.944 19257.502 - 19358.326: 94.7206% ( 21) 00:09:37.944 19358.326 - 19459.151: 94.9692% ( 21) 00:09:37.944 19459.151 - 19559.975: 95.2178% ( 21) 00:09:37.944 19559.975 - 19660.800: 95.4782% ( 22) 00:09:37.944 19660.800 - 19761.625: 95.7386% ( 22) 00:09:37.944 19761.625 - 19862.449: 96.0227% ( 24) 00:09:37.944 19862.449 - 19963.274: 96.2831% ( 22) 00:09:37.944 19963.274 - 20064.098: 96.5436% ( 22) 00:09:37.944 20064.098 - 20164.923: 96.8040% ( 22) 00:09:37.944 20164.923 - 20265.748: 97.0762% ( 23) 00:09:37.944 20265.748 - 20366.572: 97.3366% ( 22) 00:09:37.944 20366.572 - 20467.397: 97.5734% ( 20) 00:09:37.944 20467.397 - 20568.222: 97.8101% ( 20) 00:09:37.944 20568.222 - 20669.046: 97.9522% ( 12) 00:09:37.944 20669.046 - 20769.871: 98.1297% ( 15) 00:09:37.944 20769.871 - 20870.695: 98.2718% ( 12) 00:09:37.944 20870.695 - 20971.520: 98.4493% ( 15) 00:09:37.944 20971.520 - 21072.345: 98.6032% ( 13) 00:09:37.944 21072.345 - 21173.169: 98.7689% ( 14) 00:09:37.944 21173.169 - 21273.994: 98.8281% ( 5) 00:09:37.944 21273.994 - 21374.818: 98.8755% ( 4) 00:09:37.944 21374.818 - 21475.643: 98.9110% ( 3) 00:09:37.944 21475.643 - 21576.468: 98.9583% ( 4) 00:09:37.944 21576.468 - 21677.292: 98.9938% ( 3) 00:09:37.944 21677.292 - 21778.117: 99.0294% ( 3) 00:09:37.944 21778.117 - 21878.942: 99.0767% ( 4) 00:09:37.944 21878.942 - 21979.766: 99.1241% ( 4) 00:09:37.944 21979.766 - 22080.591: 99.1714% ( 4) 00:09:37.944 22080.591 - 22181.415: 99.2069% ( 3) 00:09:37.944 22181.415 - 22282.240: 99.2543% ( 4) 00:09:37.944 22282.240 - 22383.065: 99.2898% ( 3) 00:09:37.944 
22383.065 - 22483.889: 99.3371% ( 4) 00:09:37.944 22483.889 - 22584.714: 99.3845% ( 4) 00:09:37.944 22584.714 - 22685.538: 99.4318% ( 4) 00:09:37.944 22685.538 - 22786.363: 99.4673% ( 3) 00:09:37.944 22786.363 - 22887.188: 99.5147% ( 4) 00:09:37.944 22887.188 - 22988.012: 99.5502% ( 3) 00:09:37.944 22988.012 - 23088.837: 99.5857% ( 3) 00:09:37.944 23088.837 - 23189.662: 99.6212% ( 3) 00:09:37.944 23189.662 - 23290.486: 99.6686% ( 4) 00:09:37.944 23290.486 - 23391.311: 99.7159% ( 4) 00:09:37.944 23391.311 - 23492.135: 99.7514% ( 3) 00:09:37.944 23492.135 - 23592.960: 99.7988% ( 4) 00:09:37.944 23592.960 - 23693.785: 99.8461% ( 4) 00:09:37.944 23693.785 - 23794.609: 99.8935% ( 4) 00:09:37.944 23794.609 - 23895.434: 99.9290% ( 3) 00:09:37.944 23895.434 - 23996.258: 99.9763% ( 4) 00:09:37.944 23996.258 - 24097.083: 100.0000% ( 2) 00:09:37.944 00:09:37.944 20:16:52 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:39.327 Initializing NVMe Controllers 00:09:39.327 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:39.327 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:39.327 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:39.327 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:39.327 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:39.327 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:39.327 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:39.327 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:39.327 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:39.327 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:39.327 Initialization complete. Launching workers. 00:09:39.327 ======================================================== 00:09:39.327 Latency(us) 00:09:39.327 Device Information : IOPS MiB/s Average min max 00:09:39.327 PCIE (0000:00:07.0) NSID 1 from core 0: 8202.21 96.12 15601.92 9654.65 37266.83 00:09:39.327 PCIE (0000:00:09.0) NSID 1 from core 0: 8202.21 96.12 15591.19 8988.42 37521.45 00:09:39.327 PCIE (0000:00:06.0) NSID 1 from core 0: 8202.21 96.12 15573.88 7956.83 37672.92 00:09:39.327 PCIE (0000:00:08.0) NSID 1 from core 0: 8202.21 96.12 15558.81 7691.29 36848.27 00:09:39.327 PCIE (0000:00:08.0) NSID 2 from core 0: 8202.21 96.12 15545.04 10998.28 39052.79 00:09:39.327 PCIE (0000:00:08.0) NSID 3 from core 0: 8328.40 97.60 15295.41 9896.15 23094.98 00:09:39.327 ======================================================== 00:09:39.327 Total : 49339.44 578.20 15527.11 7691.29 39052.79 00:09:39.327 00:09:39.327 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:39.327 ================================================================================= 00:09:39.327 1.00000% : 10435.348us 00:09:39.327 10.00000% : 13510.498us 00:09:39.327 25.00000% : 14518.745us 00:09:39.327 50.00000% : 15526.991us 00:09:39.327 75.00000% : 16434.412us 00:09:39.328 90.00000% : 17241.009us 00:09:39.328 95.00000% : 17845.957us 00:09:39.328 98.00000% : 18450.905us 00:09:39.328 99.00000% : 35691.914us 00:09:39.328 99.50000% : 36296.862us 00:09:39.328 99.90000% : 37103.458us 00:09:39.328 99.99000% : 37305.108us 00:09:39.328 99.99900% : 37305.108us 00:09:39.328 99.99990% : 37305.108us 00:09:39.328 99.99999% : 37305.108us 00:09:39.328 00:09:39.328 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:39.328 ================================================================================= 00:09:39.328 1.00000% : 
9477.514us 00:09:39.328 10.00000% : 13510.498us 00:09:39.328 25.00000% : 14619.569us 00:09:39.328 50.00000% : 15526.991us 00:09:39.328 75.00000% : 16333.588us 00:09:39.328 90.00000% : 17241.009us 00:09:39.328 95.00000% : 18148.431us 00:09:39.328 98.00000% : 19459.151us 00:09:39.328 99.00000% : 35288.615us 00:09:39.328 99.50000% : 36498.511us 00:09:39.328 99.90000% : 37305.108us 00:09:39.328 99.99000% : 37708.406us 00:09:39.328 99.99900% : 37708.406us 00:09:39.328 99.99990% : 37708.406us 00:09:39.328 99.99999% : 37708.406us 00:09:39.328 00:09:39.328 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:39.328 ================================================================================= 00:09:39.328 1.00000% : 8318.031us 00:09:39.328 10.00000% : 13409.674us 00:09:39.328 25.00000% : 14518.745us 00:09:39.328 50.00000% : 15526.991us 00:09:39.328 75.00000% : 16535.237us 00:09:39.328 90.00000% : 17543.483us 00:09:39.328 95.00000% : 18350.080us 00:09:39.328 98.00000% : 20366.572us 00:09:39.328 99.00000% : 35086.966us 00:09:39.328 99.50000% : 36498.511us 00:09:39.328 99.90000% : 37506.757us 00:09:39.328 99.99000% : 37708.406us 00:09:39.328 99.99900% : 37708.406us 00:09:39.328 99.99990% : 37708.406us 00:09:39.328 99.99999% : 37708.406us 00:09:39.328 00:09:39.328 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:39.328 ================================================================================= 00:09:39.328 1.00000% : 8015.557us 00:09:39.328 10.00000% : 13611.323us 00:09:39.328 25.00000% : 14619.569us 00:09:39.328 50.00000% : 15426.166us 00:09:39.328 75.00000% : 16434.412us 00:09:39.328 90.00000% : 17341.834us 00:09:39.328 95.00000% : 18047.606us 00:09:39.328 98.00000% : 19862.449us 00:09:39.328 99.00000% : 34482.018us 00:09:39.328 99.50000% : 35893.563us 00:09:39.328 99.90000% : 36700.160us 00:09:39.328 99.99000% : 36901.809us 00:09:39.328 99.99900% : 36901.809us 00:09:39.328 99.99990% : 36901.809us 00:09:39.328 99.99999% : 36901.809us 00:09:39.328 00:09:39.328 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:39.328 ================================================================================= 00:09:39.328 1.00000% : 11443.594us 00:09:39.328 10.00000% : 13208.025us 00:09:39.328 25.00000% : 14317.095us 00:09:39.328 50.00000% : 15426.166us 00:09:39.328 75.00000% : 16434.412us 00:09:39.328 90.00000% : 17241.009us 00:09:39.328 95.00000% : 17745.132us 00:09:39.328 98.00000% : 19257.502us 00:09:39.328 99.00000% : 36700.160us 00:09:39.328 99.50000% : 38111.705us 00:09:39.328 99.90000% : 38918.302us 00:09:39.328 99.99000% : 39119.951us 00:09:39.328 99.99900% : 39119.951us 00:09:39.328 99.99990% : 39119.951us 00:09:39.328 99.99999% : 39119.951us 00:09:39.328 00:09:39.328 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:39.328 ================================================================================= 00:09:39.328 1.00000% : 10939.471us 00:09:39.328 10.00000% : 12855.138us 00:09:39.328 25.00000% : 14518.745us 00:09:39.328 50.00000% : 15426.166us 00:09:39.328 75.00000% : 16434.412us 00:09:39.328 90.00000% : 17140.185us 00:09:39.328 95.00000% : 17543.483us 00:09:39.328 98.00000% : 18249.255us 00:09:39.328 99.00000% : 20769.871us 00:09:39.328 99.50000% : 21979.766us 00:09:39.328 99.90000% : 22887.188us 00:09:39.328 99.99000% : 23189.662us 00:09:39.328 99.99900% : 23189.662us 00:09:39.328 99.99990% : 23189.662us 00:09:39.328 99.99999% : 23189.662us 00:09:39.328 00:09:39.328 Latency histogram for PCIE (0000:00:07.0) 
00:09:39.328 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:39.328 ==============================================================================
00:09:39.328 Range in us Cumulative IO count
00:09:39.328 [per-bucket rows condensed: cumulative distribution from 9628.751 - 9679.163us (0.0120%, 1 IO) up to 37103.458 - 37305.108us (100.0000%)]
00:09:39.328
00:09:39.329 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:39.329 ==============================================================================
00:09:39.329 Range in us Cumulative IO count
00:09:39.329 [per-bucket rows condensed: cumulative distribution from 8973.391 - 9023.803us (0.0240%, 2 IO) up to 37506.757 - 37708.406us (100.0000%)]
00:09:39.329
00:09:39.330 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:39.330 ==============================================================================
00:09:39.330 Range in us Cumulative IO count
00:09:39.330 [per-bucket rows condensed: cumulative distribution from 7914.732 - 7965.145us (0.0120%, 1 IO) up to 37506.757 - 37708.406us (100.0000%)]
00:09:39.330
00:09:39.331 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:39.331 ==============================================================================
00:09:39.331 Range in us Cumulative IO count
00:09:39.331 [per-bucket rows condensed: cumulative distribution from 7662.671 - 7713.083us (0.0481%, 4 IO) up to 36700.160 - 36901.809us (100.0000%)]
00:09:39.331
00:09:39.332 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:39.332 ==============================================================================
00:09:39.332 Range in us Cumulative IO count
00:09:39.332 [per-bucket rows condensed: cumulative distribution from 10989.883 - 11040.295us (0.0721%, 6 IO) up to 38918.302 - 39119.951us (100.0000%)]
00:09:39.332
00:09:39.333 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:39.333 ==============================================================================
00:09:39.333 Range in us Cumulative IO count
00:09:39.333 [per-bucket rows condensed: cumulative distribution from 9880.812 - 9931.225us (0.0118%, 1 IO) up to 23088.837 - 23189.662us (100.0000%)]
00:09:39.333
00:09:39.333 20:16:54 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:39.333
00:09:39.333 real 0m2.783s
00:09:39.333 user 0m2.399s
00:09:39.333 sys 0m0.255s
00:09:39.333 20:16:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:39.333 20:16:54 -- common/autotest_common.sh@10 -- # set +x
00:09:39.333 ************************************
00:09:39.333 END TEST nvme_perf
00:09:39.333 ************************************
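Note: the histogram rows condensed above are emitted one bucket per line in the shape "low - high: cumulative% ( count )". A small parsing sketch, written only against the format shown in this log, that recovers the cumulative curve and reads a percentile off it; the sample rows are copied from the 0000:00:07.0 histogram:

    # A sketch: parse "low - high: cum% ( count )" bucket rows from an SPDK
    # perf latency histogram, then find the first bucket crossing a percentile.
    import re

    BUCKET = re.compile(
        r"(?P<lo>\d+\.\d+)\s*-\s*(?P<hi>\d+\.\d+):\s*"
        r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\s*\)"
    )

    def parse_buckets(lines):
        # Buckets may be joined several to a line, so scan with finditer.
        for line in lines:
            for m in BUCKET.finditer(line):
                yield float(m["hi"]), float(m["cum"]), int(m["count"])

    def percentile(buckets, pct):
        for hi, cum, _ in buckets:
            if cum >= pct:
                return hi  # upper edge of the first bucket crossing pct
        return None

    sample = [  # rows copied from the 0000:00:07.0 NSID 1 histogram
        "10082.462 - 10132.874: 0.3726% ( 3)",
        "15426.166 - 15526.991: 50.3365% ( 253)",
        "37103.458 - 37305.108: 100.0000% ( 5)",
    ]
    print(percentile(parse_buckets(sample), 50.0))  # -> 15526.991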
00:09:39.333 20:16:54 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:39.333 20:16:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:09:39.333 20:16:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:39.333 20:16:54 -- common/autotest_common.sh@10 -- # set +x
00:09:39.333 ************************************
00:09:39.333 START TEST nvme_hello_world
00:09:39.333 ************************************
00:09:39.333 20:16:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:39.595 Initializing NVMe Controllers
00:09:39.595 Attached to 0000:00:07.0
00:09:39.595 Namespace ID: 1 size: 5GB
00:09:39.595 Attached to 0000:00:09.0
00:09:39.595 Namespace ID: 1 size: 1GB
00:09:39.595 Attached to 0000:00:06.0
00:09:39.595 Namespace ID: 1 size: 6GB
00:09:39.595 Attached to 0000:00:08.0
00:09:39.595 Namespace ID: 1 size: 4GB
00:09:39.595 Namespace ID: 2 size: 4GB
00:09:39.595 Namespace ID: 3 size: 4GB
00:09:39.595 Initialization complete.
00:09:39.595 INFO: using host memory buffer for IO
00:09:39.595 Hello world!
00:09:39.595 INFO: using host memory buffer for IO
00:09:39.595 Hello world!
00:09:39.595 INFO: using host memory buffer for IO
00:09:39.595 Hello world!
00:09:39.595 INFO: using host memory buffer for IO
00:09:39.595 Hello world!
00:09:39.595 INFO: using host memory buffer for IO
00:09:39.595 Hello world!
00:09:39.595 INFO: using host memory buffer for IO
00:09:39.595 Hello world!
00:09:39.595 ************************************
00:09:39.595 END TEST nvme_hello_world
00:09:39.595 ************************************
00:09:39.595
00:09:39.595 real 0m0.289s
00:09:39.595 user 0m0.143s
00:09:39.595 sys 0m0.101s
00:09:39.595 20:16:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:39.595 20:16:54 -- common/autotest_common.sh@10 -- # set +x
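Note: every test in this run closes with a real/user/sys block, so per-test wall time can be pulled straight out of the log. A sketch that assumes only the "START TEST <name>" banners and "real XmY.YYYs" lines seen here, attributing each timing block to the most recent banner:

    # A sketch: attribute each "real XmY.YYYs" line to the most recent
    # "START TEST <name>" banner to get per-test wall-clock time.
    import re

    START = re.compile(r"START TEST (\w+)")
    REAL = re.compile(r"real\s+(\d+)m([\d.]+)s")

    def test_durations(lines):
        durations, current = {}, None
        for line in lines:
            if m := START.search(line):
                current = m[1]
            elif (m := REAL.search(line)) and current:
                durations[current] = int(m[1]) * 60 + float(m[2])
        return durations

    print(test_durations([
        "00:09:39.333 START TEST nvme_perf",
        "00:09:39.333 real 0m2.783s",
        "00:09:39.595 START TEST nvme_hello_world",
        "00:09:39.595 real 0m0.289s",
    ]))  # {'nvme_perf': 2.783, 'nvme_hello_world': 0.289}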
00:09:39.855 20:16:54 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:39.855 20:16:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:09:39.855 20:16:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:39.855 20:16:54 -- common/autotest_common.sh@10 -- # set +x
00:09:39.855 ************************************
00:09:39.855 START TEST nvme_sgl
00:09:39.855 ************************************
00:09:39.855 20:16:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:39.855 0000:00:07.0: build_io_request_0 Invalid IO length parameter
00:09:39.855 0000:00:07.0: build_io_request_1 Invalid IO length parameter
00:09:39.855 0000:00:07.0: build_io_request_3 Invalid IO length parameter
00:09:40.116 0000:00:07.0: build_io_request_8 Invalid IO length parameter
00:09:40.116 0000:00:07.0: build_io_request_9 Invalid IO length parameter
00:09:40.116 0000:00:07.0: build_io_request_11 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_0 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_1 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_2 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_3 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_4 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_5 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_6 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_7 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_8 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_9 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_10 Invalid IO length parameter
00:09:40.116 0000:00:09.0: build_io_request_11 Invalid IO length parameter
00:09:40.116 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:09:40.116 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:09:40.116 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:09:40.116 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:09:40.116 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:09:40.116 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_0 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_1 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_2 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_3 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_4 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_5 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_6 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_7 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_8 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_9 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_10 Invalid IO length parameter
00:09:40.116 0000:00:08.0: build_io_request_11 Invalid IO length parameter
00:09:40.116 NVMe Readv/Writev Request test
00:09:40.116 Attached to 0000:00:07.0
00:09:40.116 Attached to 0000:00:09.0
00:09:40.116 Attached to 0000:00:06.0
00:09:40.116 Attached to 0000:00:08.0
00:09:40.116 0000:00:07.0: build_io_request_2 test passed
00:09:40.116 0000:00:07.0: build_io_request_4 test passed
00:09:40.116 0000:00:07.0: build_io_request_5 test passed
00:09:40.116 0000:00:07.0: build_io_request_6 test passed
00:09:40.116 0000:00:07.0: build_io_request_7 test passed
00:09:40.116 0000:00:07.0: build_io_request_10 test passed
00:09:40.116 0000:00:06.0: build_io_request_2 test passed
00:09:40.116 0000:00:06.0: build_io_request_4 test passed
00:09:40.116 0000:00:06.0: build_io_request_5 test passed
00:09:40.116 0000:00:06.0: build_io_request_6 test passed
00:09:40.116 0000:00:06.0: build_io_request_7 test passed
00:09:40.116 0000:00:06.0: build_io_request_10 test passed
00:09:40.116 Cleaning up...
00:09:40.116 ************************************
00:09:40.116 END TEST nvme_sgl
00:09:40.116 ************************************
00:09:40.116
00:09:40.116 real 0m0.419s
00:09:40.116 user 0m0.266s
00:09:40.116 sys 0m0.108s
00:09:40.116 20:16:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:40.116 20:16:55 -- common/autotest_common.sh@10 -- # set +x
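Note: in the sgl test the "Invalid IO length parameter" lines are the point of the test: those request IDs are built malformed on purpose and are expected to be rejected, while the remaining IDs must report "test passed". Which IDs are expected to fail evidently varies per controller here (0000:00:09.0 and 0000:00:08.0 reject all twelve, 0000:00:07.0 and 0000:00:06.0 reject six and pass six). A checking sketch with the expected set read off the 0000:00:07.0 output above, not taken from SPDK sources:

    # A sketch: cross-check sgl output lines against an expected split of
    # rejected vs passing request IDs for one controller.
    import re

    EXPECT_REJECT = {0, 1, 3, 8, 9, 11}  # as observed for 0000:00:07.0 above
    LINE = re.compile(r"build_io_request_(\d+) "
                      r"(Invalid IO length parameter|test passed)")

    def check(lines):
        seen = {int(m[1]): m[2] for m in map(LINE.search, lines) if m}
        for req, outcome in sorted(seen.items()):
            want = ("Invalid IO length parameter" if req in EXPECT_REJECT
                    else "test passed")
            status = "ok" if outcome == want else "MISMATCH"
            print(f"build_io_request_{req}: {outcome} [{status}]")

    check([
        "0000:00:07.0: build_io_request_0 Invalid IO length parameter",
        "0000:00:07.0: build_io_request_2 test passed",
    ])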
00:09:40.116 20:16:55 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:40.116 20:16:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:09:40.116 20:16:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:40.116 20:16:55 -- common/autotest_common.sh@10 -- # set +x
00:09:40.116 ************************************
00:09:40.116 START TEST nvme_e2edp
00:09:40.116 ************************************
00:09:40.116 20:16:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:40.377 NVMe Write/Read with End-to-End data protection test
00:09:40.377 Attached to 0000:00:07.0
00:09:40.377 Attached to 0000:00:09.0
00:09:40.377 Attached to 0000:00:06.0
00:09:40.377 Attached to 0000:00:08.0
00:09:40.377 Cleaning up...
00:09:40.377 ************************************
00:09:40.377 END TEST nvme_e2edp
00:09:40.377 ************************************
00:09:40.377
00:09:40.377 real 0m0.217s
00:09:40.377 user 0m0.059s
00:09:40.377 sys 0m0.111s
00:09:40.377 20:16:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:40.377 20:16:55 -- common/autotest_common.sh@10 -- # set +x
00:09:40.377 20:16:55 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:40.377 20:16:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:09:40.377 20:16:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:40.377 20:16:55 -- common/autotest_common.sh@10 -- # set +x
00:09:40.377 ************************************
00:09:40.377 START TEST nvme_reserve
00:09:40.377 ************************************
00:09:40.377 20:16:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:40.638 =====================================================
00:09:40.638 NVMe Controller at PCI bus 0, device 7, function 0
00:09:40.638 =====================================================
00:09:40.638 Reservations: Not Supported
00:09:40.638 =====================================================
00:09:40.638 NVMe Controller at PCI bus 0, device 9, function 0
00:09:40.638 =====================================================
00:09:40.638 Reservations: Not Supported
00:09:40.638 =====================================================
00:09:40.638 NVMe Controller at PCI bus 0, device 6, function 0
00:09:40.638 =====================================================
00:09:40.638 Reservations: Not Supported
00:09:40.638 =====================================================
00:09:40.638 NVMe Controller at PCI bus 0, device 8, function 0
00:09:40.638 =====================================================
00:09:40.638 Reservations: Not Supported
00:09:40.638 Reservation test passed
00:09:40.638
00:09:40.638 real 0m0.214s
00:09:40.638 user 0m0.066s
00:09:40.638 sys 0m0.098s
00:09:40.638 20:16:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:40.638 ************************************
00:09:40.638 END TEST nvme_reserve
00:09:40.638 ************************************
00:09:40.898 20:16:55 -- common/autotest_common.sh@10 -- # set +x
00:09:40.898 20:16:55 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:40.898 20:16:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:09:40.898 20:16:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:40.898 20:16:55 -- common/autotest_common.sh@10 -- # set +x
00:09:40.898 ************************************
00:09:40.898 START TEST nvme_err_injection
00:09:40.898 ************************************
00:09:40.898 20:16:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:41.159 NVMe Error Injection test
00:09:41.159 Attached to 0000:00:07.0
00:09:41.159 Attached to 0000:00:09.0
00:09:41.159 Attached to 0000:00:06.0
00:09:41.159 Attached to 0000:00:08.0
00:09:41.159 0000:00:07.0: get features failed as expected
00:09:41.159 0000:00:09.0: get features failed as expected
00:09:41.159 0000:00:06.0: get features failed as expected
00:09:41.159 0000:00:08.0: get features failed as expected
00:09:41.159 0000:00:08.0: get features successfully as expected
00:09:41.159 0000:00:07.0: get features successfully as expected
00:09:41.159 0000:00:09.0: get features successfully as expected
00:09:41.159 0000:00:06.0: get features successfully as expected
00:09:41.159 0000:00:08.0: read failed as expected
00:09:41.159 0000:00:09.0: read failed as expected
00:09:41.159 0000:00:06.0: read failed as expected
00:09:41.159 0000:00:07.0: read failed as expected
00:09:41.159 0000:00:09.0: read successfully as expected
00:09:41.159 0000:00:06.0: read successfully as expected
00:09:41.159 0000:00:07.0: read successfully as expected
00:09:41.159 0000:00:08.0: read successfully as expected
00:09:41.159 Cleaning up...
00:09:41.159 ************************************
00:09:41.159 END TEST nvme_err_injection
00:09:41.159 ************************************
00:09:41.159
00:09:41.159 real 0m0.292s
00:09:41.159 user 0m0.131s
00:09:41.159 sys 0m0.110s
00:09:41.159 20:16:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:41.159 20:16:55 -- common/autotest_common.sh@10 -- # set +x
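Note: the err_injection output reads in two phases per controller and operation: with the injected error armed the operation must fail ("failed as expected"), and after the injection is cleared the same operation must succeed ("successfully as expected"). A sketch that verifies both phases appear for every attached controller, written only against the line format shown above:

    # A sketch: check that every attached controller logged both phases
    # ("failed as expected", then "successfully as expected") for each op.
    import re

    PHASE = re.compile(r"(0000:00:0[6-9]\.0): (get features|read) "
                       r"(failed|successfully) as expected")

    def verify(lines):
        seen = {m.groups() for m in map(PHASE.search, lines) if m}
        devs = {d for d, _, _ in seen}
        missing = [(d, op, ph) for d in sorted(devs)
                   for op in ("get features", "read")
                   for ph in ("failed", "successfully")
                   if (d, op, ph) not in seen]
        return missing or "all controllers hit both phases for both ops"

    print(verify([
        "0000:00:07.0: get features failed as expected",
        "0000:00:07.0: get features successfully as expected",
        "0000:00:07.0: read failed as expected",
        "0000:00:07.0: read successfully as expected",
    ]))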
successfully as expected 00:09:41.159 0000:00:06.0: get features successfully as expected 00:09:41.159 0000:00:08.0: read failed as expected 00:09:41.159 0000:00:09.0: read failed as expected 00:09:41.159 0000:00:06.0: read failed as expected 00:09:41.159 0000:00:07.0: read failed as expected 00:09:41.159 0000:00:09.0: read successfully as expected 00:09:41.159 0000:00:06.0: read successfully as expected 00:09:41.159 0000:00:07.0: read successfully as expected 00:09:41.159 0000:00:08.0: read successfully as expected 00:09:41.159 Cleaning up... 00:09:41.159 ************************************ 00:09:41.159 END TEST nvme_err_injection 00:09:41.159 ************************************ 00:09:41.159 00:09:41.159 real 0m0.292s 00:09:41.159 user 0m0.131s 00:09:41.159 sys 0m0.110s 00:09:41.159 20:16:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.159 20:16:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.159 20:16:55 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:41.159 20:16:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:41.159 20:16:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.159 20:16:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.159 ************************************ 00:09:41.159 START TEST nvme_overhead 00:09:41.159 ************************************ 00:09:41.159 20:16:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:42.563 Initializing NVMe Controllers 00:09:42.563 Attached to 0000:00:07.0 00:09:42.563 Attached to 0000:00:09.0 00:09:42.563 Attached to 0000:00:06.0 00:09:42.563 Attached to 0000:00:08.0 00:09:42.563 Initialization complete. Launching workers. 
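The nvme_overhead binary launched just above measures per-I/O submit and completion latency in nanoseconds on each of the four attached controllers; the histograms that follow are its -H output. A minimal sketch of rerunning it by hand on the same VM layout — the flag meanings are inferred from SPDK's perf-style tool conventions, not stated anywhere in this log:

# assumed flag semantics (SPDK perf conventions, unconfirmed here):
#   -o 4096  I/O size in bytes        -t 1  run time in seconds
#   -H       print latency histograms  -i 0  shared-memory id
sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0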
00:09:42.563 submit (in ns) avg, min, max = 12162.9, 10250.0, 341410.0 00:09:42.563 complete (in ns) avg, min, max = 8275.8, 7300.8, 406142.3 00:09:42.563 00:09:42.563 Submit histogram 00:09:42.563 ================ 00:09:42.563 Range in us Cumulative Count 00:09:42.563 10.240 - 10.289: 0.0167% ( 1) 00:09:42.563 10.535 - 10.585: 0.0334% ( 1) 00:09:42.563 10.683 - 10.732: 0.0501% ( 1) 00:09:42.563 10.732 - 10.782: 0.1337% ( 5) 00:09:42.563 10.782 - 10.831: 0.4179% ( 17) 00:09:42.563 10.831 - 10.880: 0.5850% ( 10) 00:09:42.563 10.880 - 10.929: 0.9527% ( 22) 00:09:42.563 10.929 - 10.978: 1.6213% ( 40) 00:09:42.563 10.978 - 11.028: 3.1422% ( 91) 00:09:42.563 11.028 - 11.077: 6.2343% ( 185) 00:09:42.563 11.077 - 11.126: 11.3488% ( 306) 00:09:42.563 11.126 - 11.175: 18.4690% ( 426) 00:09:42.563 11.175 - 11.225: 26.9263% ( 506) 00:09:42.563 11.225 - 11.274: 36.3864% ( 566) 00:09:42.563 11.274 - 11.323: 45.0109% ( 516) 00:09:42.563 11.323 - 11.372: 53.8359% ( 528) 00:09:42.563 11.372 - 11.422: 60.9895% ( 428) 00:09:42.563 11.422 - 11.471: 66.8227% ( 349) 00:09:42.563 11.471 - 11.520: 71.1182% ( 257) 00:09:42.563 11.520 - 11.569: 74.2604% ( 188) 00:09:42.563 11.569 - 11.618: 76.3497% ( 125) 00:09:42.563 11.618 - 11.668: 77.8038% ( 87) 00:09:42.563 11.668 - 11.717: 79.5587% ( 105) 00:09:42.563 11.717 - 11.766: 80.6117% ( 63) 00:09:42.563 11.766 - 11.815: 81.7817% ( 70) 00:09:42.563 11.815 - 11.865: 83.0018% ( 73) 00:09:42.563 11.865 - 11.914: 83.9211% ( 55) 00:09:42.563 11.914 - 11.963: 84.7234% ( 48) 00:09:42.563 11.963 - 12.012: 85.3251% ( 36) 00:09:42.563 12.012 - 12.062: 85.7931% ( 28) 00:09:42.563 12.062 - 12.111: 86.3112% ( 31) 00:09:42.563 12.111 - 12.160: 86.6789% ( 22) 00:09:42.563 12.160 - 12.209: 87.1971% ( 31) 00:09:42.563 12.209 - 12.258: 87.4979% ( 18) 00:09:42.563 12.258 - 12.308: 87.8155% ( 19) 00:09:42.563 12.308 - 12.357: 88.1330% ( 19) 00:09:42.563 12.357 - 12.406: 88.3503% ( 13) 00:09:42.563 12.406 - 12.455: 88.6345% ( 17) 00:09:42.563 12.455 - 12.505: 88.7515% ( 7) 00:09:42.563 12.505 - 12.554: 88.8517% ( 6) 00:09:42.563 12.554 - 12.603: 88.9687% ( 7) 00:09:42.563 12.603 - 12.702: 89.2863% ( 19) 00:09:42.563 12.702 - 12.800: 89.3699% ( 5) 00:09:42.563 12.800 - 12.898: 89.4869% ( 7) 00:09:42.563 12.898 - 12.997: 89.5537% ( 4) 00:09:42.563 12.997 - 13.095: 89.6039% ( 3) 00:09:42.563 13.095 - 13.194: 89.6540% ( 3) 00:09:42.563 13.194 - 13.292: 89.7376% ( 5) 00:09:42.563 13.292 - 13.391: 89.8044% ( 4) 00:09:42.563 13.391 - 13.489: 89.9382% ( 8) 00:09:42.563 13.489 - 13.588: 90.0719% ( 8) 00:09:42.563 13.588 - 13.686: 90.2223% ( 9) 00:09:42.563 13.686 - 13.785: 90.4563% ( 14) 00:09:42.563 13.785 - 13.883: 90.7070% ( 15) 00:09:42.563 13.883 - 13.982: 90.8574% ( 9) 00:09:42.563 13.982 - 14.080: 91.1416% ( 17) 00:09:42.563 14.080 - 14.178: 91.4090% ( 16) 00:09:42.563 14.178 - 14.277: 91.7098% ( 18) 00:09:42.563 14.277 - 14.375: 91.9606% ( 15) 00:09:42.563 14.375 - 14.474: 92.2113% ( 15) 00:09:42.563 14.474 - 14.572: 92.3951% ( 11) 00:09:42.563 14.572 - 14.671: 92.5790% ( 11) 00:09:42.563 14.671 - 14.769: 92.6458% ( 4) 00:09:42.563 14.769 - 14.868: 92.7795% ( 8) 00:09:42.563 14.868 - 14.966: 93.0470% ( 16) 00:09:42.563 14.966 - 15.065: 93.1473% ( 6) 00:09:42.563 15.065 - 15.163: 93.2810% ( 8) 00:09:42.563 15.163 - 15.262: 93.4147% ( 8) 00:09:42.563 15.262 - 15.360: 93.5317% ( 7) 00:09:42.563 15.360 - 15.458: 93.7322% ( 12) 00:09:42.563 15.458 - 15.557: 93.9328% ( 12) 00:09:42.563 15.557 - 15.655: 93.9830% ( 3) 00:09:42.563 15.655 - 15.754: 94.0999% ( 7) 00:09:42.563 15.754 - 15.852: 
94.1668% ( 4) 00:09:42.563 15.852 - 15.951: 94.2169% ( 3) 00:09:42.563 15.951 - 16.049: 94.2838% ( 4) 00:09:42.563 16.049 - 16.148: 94.3841% ( 6) 00:09:42.563 16.148 - 16.246: 94.5178% ( 8) 00:09:42.563 16.246 - 16.345: 94.6348% ( 7) 00:09:42.563 16.345 - 16.443: 94.7351% ( 6) 00:09:42.563 16.443 - 16.542: 94.8521% ( 7) 00:09:42.563 16.542 - 16.640: 94.9189% ( 4) 00:09:42.563 16.640 - 16.738: 95.0192% ( 6) 00:09:42.563 16.738 - 16.837: 95.2198% ( 12) 00:09:42.563 16.837 - 16.935: 95.3869% ( 10) 00:09:42.563 16.935 - 17.034: 95.6209% ( 14) 00:09:42.563 17.034 - 17.132: 95.8382% ( 13) 00:09:42.563 17.132 - 17.231: 96.0388% ( 12) 00:09:42.563 17.231 - 17.329: 96.1892% ( 9) 00:09:42.563 17.329 - 17.428: 96.4399% ( 15) 00:09:42.563 17.428 - 17.526: 96.6238% ( 11) 00:09:42.563 17.526 - 17.625: 96.9580% ( 20) 00:09:42.563 17.625 - 17.723: 97.1419% ( 11) 00:09:42.563 17.723 - 17.822: 97.3759% ( 14) 00:09:42.563 17.822 - 17.920: 97.4929% ( 7) 00:09:42.563 17.920 - 18.018: 97.7603% ( 16) 00:09:42.563 18.018 - 18.117: 97.8940% ( 8) 00:09:42.563 18.117 - 18.215: 98.0110% ( 7) 00:09:42.563 18.215 - 18.314: 98.0612% ( 3) 00:09:42.563 18.314 - 18.412: 98.1280% ( 4) 00:09:42.563 18.412 - 18.511: 98.1615% ( 2) 00:09:42.563 18.511 - 18.609: 98.2785% ( 7) 00:09:42.563 18.609 - 18.708: 98.3453% ( 4) 00:09:42.563 18.708 - 18.806: 98.3620% ( 1) 00:09:42.563 18.806 - 18.905: 98.4289% ( 4) 00:09:42.563 19.003 - 19.102: 98.4790% ( 3) 00:09:42.563 19.102 - 19.200: 98.5292% ( 3) 00:09:42.563 19.200 - 19.298: 98.5459% ( 1) 00:09:42.563 19.298 - 19.397: 98.5793% ( 2) 00:09:42.563 19.495 - 19.594: 98.6462% ( 4) 00:09:42.563 19.594 - 19.692: 98.6796% ( 2) 00:09:42.563 19.692 - 19.791: 98.7130% ( 2) 00:09:42.564 19.791 - 19.889: 98.7297% ( 1) 00:09:42.564 19.988 - 20.086: 98.7464% ( 1) 00:09:42.564 20.086 - 20.185: 98.7632% ( 1) 00:09:42.564 20.283 - 20.382: 98.7799% ( 1) 00:09:42.564 20.382 - 20.480: 98.7966% ( 1) 00:09:42.564 20.480 - 20.578: 98.8133% ( 1) 00:09:42.564 20.578 - 20.677: 98.8300% ( 1) 00:09:42.564 20.677 - 20.775: 98.8467% ( 1) 00:09:42.564 20.775 - 20.874: 98.8634% ( 1) 00:09:42.564 21.071 - 21.169: 98.8802% ( 1) 00:09:42.564 21.366 - 21.465: 98.9136% ( 2) 00:09:42.564 21.662 - 21.760: 98.9303% ( 1) 00:09:42.564 21.760 - 21.858: 98.9470% ( 1) 00:09:42.564 21.858 - 21.957: 98.9804% ( 2) 00:09:42.564 22.252 - 22.351: 98.9972% ( 1) 00:09:42.564 24.123 - 24.222: 99.0139% ( 1) 00:09:42.564 24.320 - 24.418: 99.0473% ( 2) 00:09:42.564 25.994 - 26.191: 99.0640% ( 1) 00:09:42.564 26.191 - 26.388: 99.0974% ( 2) 00:09:42.564 28.357 - 28.554: 99.1142% ( 1) 00:09:42.564 28.948 - 29.145: 99.1309% ( 1) 00:09:42.564 29.342 - 29.538: 99.1810% ( 3) 00:09:42.564 29.538 - 29.735: 99.2479% ( 4) 00:09:42.564 29.735 - 29.932: 99.3983% ( 9) 00:09:42.564 29.932 - 30.129: 99.4484% ( 3) 00:09:42.564 30.129 - 30.326: 99.5654% ( 7) 00:09:42.564 30.326 - 30.523: 99.5989% ( 2) 00:09:42.564 30.523 - 30.720: 99.6156% ( 1) 00:09:42.564 30.720 - 30.917: 99.6490% ( 2) 00:09:42.564 30.917 - 31.114: 99.6657% ( 1) 00:09:42.564 31.311 - 31.508: 99.6991% ( 2) 00:09:42.564 31.508 - 31.705: 99.7159% ( 1) 00:09:42.564 32.098 - 32.295: 99.7326% ( 1) 00:09:42.564 35.446 - 35.643: 99.7493% ( 1) 00:09:42.564 38.006 - 38.203: 99.7660% ( 1) 00:09:42.564 38.203 - 38.400: 99.7827% ( 1) 00:09:42.564 38.597 - 38.794: 99.7994% ( 1) 00:09:42.564 39.582 - 39.778: 99.8161% ( 1) 00:09:42.564 47.458 - 47.655: 99.8329% ( 1) 00:09:42.564 48.640 - 48.837: 99.8496% ( 1) 00:09:42.564 55.532 - 55.926: 99.8663% ( 1) 00:09:42.564 55.926 - 56.320: 99.8830% ( 1) 
00:09:42.564 58.289 - 58.683: 99.8997% ( 1) 00:09:42.564 58.683 - 59.077: 99.9164% ( 1) 00:09:42.564 59.077 - 59.471: 99.9331% ( 1) 00:09:42.564 87.434 - 87.828: 99.9499% ( 1) 00:09:42.564 89.009 - 89.403: 99.9666% ( 1) 00:09:42.564 302.474 - 304.049: 99.9833% ( 1) 00:09:42.564 340.283 - 341.858: 100.0000% ( 1) 00:09:42.564 00:09:42.564 Complete histogram 00:09:42.564 ================== 00:09:42.564 Range in us Cumulative Count 00:09:42.564 7.286 - 7.335: 0.2173% ( 13) 00:09:42.564 7.335 - 7.385: 2.0558% ( 110) 00:09:42.564 7.385 - 7.434: 9.4936% ( 445) 00:09:42.564 7.434 - 7.483: 20.7922% ( 676) 00:09:42.564 7.483 - 7.532: 30.4362% ( 577) 00:09:42.564 7.532 - 7.582: 36.7709% ( 379) 00:09:42.564 7.582 - 7.631: 40.6318% ( 231) 00:09:42.564 7.631 - 7.680: 43.6069% ( 178) 00:09:42.564 7.680 - 7.729: 45.1279% ( 91) 00:09:42.564 7.729 - 7.778: 46.1976% ( 64) 00:09:42.564 7.778 - 7.828: 47.4010% ( 72) 00:09:42.564 7.828 - 7.877: 49.9582% ( 153) 00:09:42.564 7.877 - 7.926: 55.3234% ( 321) 00:09:42.564 7.926 - 7.975: 61.9422% ( 396) 00:09:42.564 7.975 - 8.025: 67.4244% ( 328) 00:09:42.564 8.025 - 8.074: 72.3717% ( 296) 00:09:42.564 8.074 - 8.123: 77.6199% ( 314) 00:09:42.564 8.123 - 8.172: 81.6647% ( 242) 00:09:42.564 8.172 - 8.222: 84.3557% ( 161) 00:09:42.564 8.222 - 8.271: 86.3781% ( 121) 00:09:42.564 8.271 - 8.320: 88.2835% ( 114) 00:09:42.564 8.320 - 8.369: 89.1359% ( 51) 00:09:42.564 8.369 - 8.418: 90.0217% ( 53) 00:09:42.564 8.418 - 8.468: 90.8741% ( 51) 00:09:42.564 8.468 - 8.517: 92.0441% ( 70) 00:09:42.564 8.517 - 8.566: 92.8297% ( 47) 00:09:42.564 8.566 - 8.615: 93.4982% ( 40) 00:09:42.564 8.615 - 8.665: 94.1167% ( 37) 00:09:42.564 8.665 - 8.714: 94.5679% ( 27) 00:09:42.564 8.714 - 8.763: 94.9524% ( 23) 00:09:42.564 8.763 - 8.812: 95.2532% ( 18) 00:09:42.564 8.812 - 8.862: 95.4872% ( 14) 00:09:42.564 8.862 - 8.911: 95.6878% ( 12) 00:09:42.564 8.911 - 8.960: 95.9552% ( 16) 00:09:42.564 8.960 - 9.009: 96.0388% ( 5) 00:09:42.564 9.009 - 9.058: 96.1558% ( 7) 00:09:42.564 9.058 - 9.108: 96.2226% ( 4) 00:09:42.564 9.108 - 9.157: 96.2561% ( 2) 00:09:42.564 9.157 - 9.206: 96.2895% ( 2) 00:09:42.564 9.206 - 9.255: 96.3062% ( 1) 00:09:42.564 9.255 - 9.305: 96.3229% ( 1) 00:09:42.564 9.305 - 9.354: 96.3731% ( 3) 00:09:42.564 9.354 - 9.403: 96.4399% ( 4) 00:09:42.564 9.403 - 9.452: 96.4733% ( 2) 00:09:42.564 9.502 - 9.551: 96.5235% ( 3) 00:09:42.564 9.600 - 9.649: 96.5736% ( 3) 00:09:42.564 9.649 - 9.698: 96.6071% ( 2) 00:09:42.564 9.698 - 9.748: 96.6238% ( 1) 00:09:42.564 9.748 - 9.797: 96.6405% ( 1) 00:09:42.564 9.846 - 9.895: 96.6572% ( 1) 00:09:42.564 9.945 - 9.994: 96.6906% ( 2) 00:09:42.564 9.994 - 10.043: 96.7408% ( 3) 00:09:42.564 10.043 - 10.092: 96.7742% ( 2) 00:09:42.564 10.092 - 10.142: 96.7909% ( 1) 00:09:42.564 10.142 - 10.191: 96.8076% ( 1) 00:09:42.564 10.240 - 10.289: 96.8745% ( 4) 00:09:42.564 10.289 - 10.338: 96.9246% ( 3) 00:09:42.564 10.338 - 10.388: 96.9413% ( 1) 00:09:42.564 10.388 - 10.437: 97.0082% ( 4) 00:09:42.564 10.437 - 10.486: 97.0416% ( 2) 00:09:42.564 10.486 - 10.535: 97.0583% ( 1) 00:09:42.564 10.535 - 10.585: 97.0918% ( 2) 00:09:42.564 10.585 - 10.634: 97.1252% ( 2) 00:09:42.564 10.634 - 10.683: 97.1419% ( 1) 00:09:42.564 10.683 - 10.732: 97.2088% ( 4) 00:09:42.564 10.732 - 10.782: 97.2255% ( 1) 00:09:42.564 10.782 - 10.831: 97.2756% ( 3) 00:09:42.564 10.831 - 10.880: 97.2923% ( 1) 00:09:42.564 10.929 - 10.978: 97.3258% ( 2) 00:09:42.564 10.978 - 11.028: 97.3425% ( 1) 00:09:42.564 11.077 - 11.126: 97.3926% ( 3) 00:09:42.564 11.126 - 11.175: 97.4260% ( 2) 
00:09:42.564 11.175 - 11.225: 97.4428% ( 1) 00:09:42.564 11.225 - 11.274: 97.5096% ( 4) 00:09:42.564 11.372 - 11.422: 97.5430% ( 2) 00:09:42.564 11.422 - 11.471: 97.5598% ( 1) 00:09:42.564 11.471 - 11.520: 97.5765% ( 1) 00:09:42.564 11.569 - 11.618: 97.6099% ( 2) 00:09:42.564 11.618 - 11.668: 97.6266% ( 1) 00:09:42.564 11.668 - 11.717: 97.6600% ( 2) 00:09:42.564 11.717 - 11.766: 97.6935% ( 2) 00:09:42.565 11.766 - 11.815: 97.7102% ( 1) 00:09:42.565 11.815 - 11.865: 97.7436% ( 2) 00:09:42.565 11.865 - 11.914: 97.7603% ( 1) 00:09:42.565 11.914 - 11.963: 97.7770% ( 1) 00:09:42.565 12.062 - 12.111: 97.8105% ( 2) 00:09:42.565 12.111 - 12.160: 97.8272% ( 1) 00:09:42.565 12.160 - 12.209: 97.8439% ( 1) 00:09:42.565 12.455 - 12.505: 97.8606% ( 1) 00:09:42.565 12.800 - 12.898: 97.8773% ( 1) 00:09:42.565 13.095 - 13.194: 97.9275% ( 3) 00:09:42.565 13.194 - 13.292: 97.9776% ( 3) 00:09:42.565 13.292 - 13.391: 97.9943% ( 1) 00:09:42.565 13.391 - 13.489: 98.0277% ( 2) 00:09:42.565 13.489 - 13.588: 98.0946% ( 4) 00:09:42.565 13.588 - 13.686: 98.1280% ( 2) 00:09:42.565 13.686 - 13.785: 98.1447% ( 1) 00:09:42.565 13.785 - 13.883: 98.2283% ( 5) 00:09:42.565 13.883 - 13.982: 98.2785% ( 3) 00:09:42.565 13.982 - 14.080: 98.2952% ( 1) 00:09:42.565 14.080 - 14.178: 98.3453% ( 3) 00:09:42.565 14.178 - 14.277: 98.4456% ( 6) 00:09:42.565 14.277 - 14.375: 98.4790% ( 2) 00:09:42.565 14.375 - 14.474: 98.5125% ( 2) 00:09:42.565 14.474 - 14.572: 98.6295% ( 7) 00:09:42.565 14.572 - 14.671: 98.6462% ( 1) 00:09:42.565 14.671 - 14.769: 98.7130% ( 4) 00:09:42.565 14.769 - 14.868: 98.7464% ( 2) 00:09:42.565 14.868 - 14.966: 98.7966% ( 3) 00:09:42.565 14.966 - 15.065: 98.8300% ( 2) 00:09:42.565 15.163 - 15.262: 98.8802% ( 3) 00:09:42.565 15.557 - 15.655: 98.8969% ( 1) 00:09:42.565 15.655 - 15.754: 98.9303% ( 2) 00:09:42.565 15.852 - 15.951: 98.9470% ( 1) 00:09:42.565 16.148 - 16.246: 98.9637% ( 1) 00:09:42.565 16.443 - 16.542: 98.9804% ( 1) 00:09:42.565 16.542 - 16.640: 98.9972% ( 1) 00:09:42.565 17.723 - 17.822: 99.0306% ( 2) 00:09:42.565 17.822 - 17.920: 99.0473% ( 1) 00:09:42.565 18.117 - 18.215: 99.0640% ( 1) 00:09:42.565 18.215 - 18.314: 99.0807% ( 1) 00:09:42.565 18.806 - 18.905: 99.0974% ( 1) 00:09:42.565 19.495 - 19.594: 99.1142% ( 1) 00:09:42.565 20.086 - 20.185: 99.1309% ( 1) 00:09:42.565 20.578 - 20.677: 99.1476% ( 1) 00:09:42.565 22.055 - 22.154: 99.1810% ( 2) 00:09:42.565 22.252 - 22.351: 99.1977% ( 1) 00:09:42.565 22.351 - 22.449: 99.2144% ( 1) 00:09:42.565 22.449 - 22.548: 99.2813% ( 4) 00:09:42.565 22.548 - 22.646: 99.3314% ( 3) 00:09:42.565 22.646 - 22.745: 99.3983% ( 4) 00:09:42.565 22.745 - 22.843: 99.4484% ( 3) 00:09:42.565 22.843 - 22.942: 99.4819% ( 2) 00:09:42.565 22.942 - 23.040: 99.5153% ( 2) 00:09:42.565 23.434 - 23.532: 99.5320% ( 1) 00:09:42.565 23.729 - 23.828: 99.5487% ( 1) 00:09:42.565 23.828 - 23.926: 99.5654% ( 1) 00:09:42.565 24.025 - 24.123: 99.5821% ( 1) 00:09:42.565 24.123 - 24.222: 99.6156% ( 2) 00:09:42.565 24.320 - 24.418: 99.6323% ( 1) 00:09:42.565 24.418 - 24.517: 99.6490% ( 1) 00:09:42.565 24.714 - 24.812: 99.6657% ( 1) 00:09:42.565 24.812 - 24.911: 99.6824% ( 1) 00:09:42.565 25.108 - 25.206: 99.6991% ( 1) 00:09:42.565 25.206 - 25.403: 99.7326% ( 2) 00:09:42.565 25.600 - 25.797: 99.7660% ( 2) 00:09:42.565 28.357 - 28.554: 99.7827% ( 1) 00:09:42.565 28.751 - 28.948: 99.7994% ( 1) 00:09:42.565 37.022 - 37.218: 99.8161% ( 1) 00:09:42.565 39.975 - 40.172: 99.8329% ( 1) 00:09:42.565 43.323 - 43.520: 99.8663% ( 2) 00:09:42.565 47.065 - 47.262: 99.8830% ( 1) 00:09:42.565 61.046 - 61.440: 
99.8997% ( 1) 00:09:42.565 61.440 - 61.834: 99.9164% ( 1) 00:09:42.565 65.772 - 66.166: 99.9331% ( 1) 00:09:42.565 69.317 - 69.711: 99.9499% ( 1) 00:09:42.565 214.252 - 215.828: 99.9666% ( 1) 00:09:42.565 261.514 - 263.089: 99.9833% ( 1) 00:09:42.565 403.298 - 406.449: 100.0000% ( 1) 00:09:42.565 00:09:42.565 00:09:42.565 real 0m1.226s 00:09:42.565 user 0m1.068s 00:09:42.565 sys 0m0.108s 00:09:42.565 20:16:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.565 20:16:57 -- common/autotest_common.sh@10 -- # set +x 00:09:42.565 ************************************ 00:09:42.565 END TEST nvme_overhead 00:09:42.565 ************************************ 00:09:42.565 20:16:57 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:42.565 20:16:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:42.565 20:16:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:42.565 20:16:57 -- common/autotest_common.sh@10 -- # set +x 00:09:42.565 ************************************ 00:09:42.565 START TEST nvme_arbitration 00:09:42.565 ************************************ 00:09:42.565 20:16:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:45.884 Initializing NVMe Controllers 00:09:45.884 Attached to 0000:00:07.0 00:09:45.884 Attached to 0000:00:09.0 00:09:45.884 Attached to 0000:00:06.0 00:09:45.884 Attached to 0000:00:08.0 00:09:45.884 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:09:45.884 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:09:45.884 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:09:45.884 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:45.884 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:45.884 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:45.884 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:45.884 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:45.884 Initialization complete. Launching workers. 
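In the per-core arbitration results below, the secs/100000 ios column is just the projected time to finish the -n 100000 I/Os at the measured rate, i.e. 100000 / IOPS. A quick check against the two rates that appear:

awk 'BEGIN { printf "%.2f\n", 100000/874.67 }'   # -> 114.33, matching the 874.67 IO/s rows
awk 'BEGIN { printf "%.2f\n", 100000/832.00 }'   # -> 120.19, matching the 832.00 IO/s rows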
00:09:45.884 Starting thread on core 1 with urgent priority queue 00:09:45.884 Starting thread on core 2 with urgent priority queue 00:09:45.884 Starting thread on core 3 with urgent priority queue 00:09:45.884 Starting thread on core 0 with urgent priority queue 00:09:45.884 QEMU NVMe Ctrl (12341 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:45.884 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:45.884 QEMU NVMe Ctrl (12343 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:45.884 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:45.884 QEMU NVMe Ctrl (12340 ) core 2: 832.00 IO/s 120.19 secs/100000 ios 00:09:45.884 QEMU NVMe Ctrl (12342 ) core 3: 832.00 IO/s 120.19 secs/100000 ios 00:09:45.884 ======================================================== 00:09:45.884 00:09:45.884 00:09:45.884 real 0m3.405s 00:09:45.884 user 0m9.532s 00:09:45.884 sys 0m0.111s 00:09:45.884 ************************************ 00:09:45.884 END TEST nvme_arbitration 00:09:45.884 ************************************ 00:09:45.884 20:17:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.884 20:17:00 -- common/autotest_common.sh@10 -- # set +x 00:09:45.884 20:17:00 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:45.885 20:17:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:45.885 20:17:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:45.885 20:17:00 -- common/autotest_common.sh@10 -- # set +x 00:09:45.885 ************************************ 00:09:45.885 START TEST nvme_single_aen 00:09:45.885 ************************************ 00:09:45.885 20:17:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:45.885 [2024-10-16 20:17:00.703713] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:45.885 [2024-10-16 20:17:00.703784] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.145 [2024-10-16 20:17:00.852243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:46.145 [2024-10-16 20:17:00.853843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:46.145 [2024-10-16 20:17:00.855083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:46.145 [2024-10-16 20:17:00.856399] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:46.145 Asynchronous Event Request test 00:09:46.145 Attached to 0000:00:07.0 00:09:46.145 Attached to 0000:00:09.0 00:09:46.145 Attached to 0000:00:06.0 00:09:46.145 Attached to 0000:00:08.0 00:09:46.145 Reset controller to setup AER completions for this process 00:09:46.145 Registering asynchronous event callbacks... 
00:09:46.145 Getting orig temperature thresholds of all controllers 00:09:46.145 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:46.145 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:46.145 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:46.145 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:46.145 Setting all controllers temperature threshold low to trigger AER 00:09:46.145 Waiting for all controllers temperature threshold to be set lower 00:09:46.145 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:46.145 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:46.145 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:46.145 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:46.145 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:46.145 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:46.145 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:46.145 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:46.145 Waiting for all controllers to trigger AER and reset threshold 00:09:46.145 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.145 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.145 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.145 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.145 Cleaning up... 00:09:46.145 00:09:46.145 real 0m0.230s 00:09:46.145 user 0m0.067s 00:09:46.145 sys 0m0.115s 00:09:46.145 20:17:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.145 20:17:00 -- common/autotest_common.sh@10 -- # set +x 00:09:46.145 ************************************ 00:09:46.145 END TEST nvme_single_aen 00:09:46.145 ************************************ 00:09:46.145 20:17:00 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:46.145 20:17:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:46.145 20:17:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:46.145 20:17:00 -- common/autotest_common.sh@10 -- # set +x 00:09:46.145 ************************************ 00:09:46.145 START TEST nvme_doorbell_aers 00:09:46.145 ************************************ 00:09:46.145 20:17:00 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:09:46.145 20:17:00 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:46.145 20:17:00 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:46.145 20:17:00 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:46.145 20:17:00 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:46.145 20:17:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:46.145 20:17:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:46.145 20:17:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:46.145 20:17:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:46.145 20:17:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:46.145 20:17:01 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:46.145 20:17:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:46.145 20:17:01 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:46.145 20:17:01 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:46.406 [2024-10-16 20:17:01.237539] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:09:56.401 Executing: test_write_invalid_db 00:09:56.401 Waiting for AER completion... 00:09:56.401 Failure: test_write_invalid_db 00:09:56.401 00:09:56.401 Executing: test_invalid_db_write_overflow_sq 00:09:56.401 Waiting for AER completion... 00:09:56.401 Failure: test_invalid_db_write_overflow_sq 00:09:56.401 00:09:56.401 Executing: test_invalid_db_write_overflow_cq 00:09:56.401 Waiting for AER completion... 00:09:56.401 Failure: test_invalid_db_write_overflow_cq 00:09:56.401 00:09:56.401 20:17:11 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:56.401 20:17:11 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:56.401 [2024-10-16 20:17:11.306851] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:06.381 Executing: test_write_invalid_db 00:10:06.381 Waiting for AER completion... 00:10:06.381 Failure: test_write_invalid_db 00:10:06.381 00:10:06.381 Executing: test_invalid_db_write_overflow_sq 00:10:06.381 Waiting for AER completion... 00:10:06.381 Failure: test_invalid_db_write_overflow_sq 00:10:06.381 00:10:06.381 Executing: test_invalid_db_write_overflow_cq 00:10:06.381 Waiting for AER completion... 00:10:06.381 Failure: test_invalid_db_write_overflow_cq 00:10:06.381 00:10:06.381 20:17:21 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:06.381 20:17:21 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:06.639 [2024-10-16 20:17:21.314189] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:16.622 Executing: test_write_invalid_db 00:10:16.622 Waiting for AER completion... 00:10:16.622 Failure: test_write_invalid_db 00:10:16.622 00:10:16.623 Executing: test_invalid_db_write_overflow_sq 00:10:16.623 Waiting for AER completion... 00:10:16.623 Failure: test_invalid_db_write_overflow_sq 00:10:16.623 00:10:16.623 Executing: test_invalid_db_write_overflow_cq 00:10:16.623 Waiting for AER completion... 00:10:16.623 Failure: test_invalid_db_write_overflow_cq 00:10:16.623 00:10:16.623 20:17:31 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:16.623 20:17:31 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:16.623 [2024-10-16 20:17:31.325410] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 Executing: test_write_invalid_db 00:10:26.632 Waiting for AER completion... 00:10:26.632 Failure: test_write_invalid_db 00:10:26.632 00:10:26.632 Executing: test_invalid_db_write_overflow_sq 00:10:26.632 Waiting for AER completion... 00:10:26.632 Failure: test_invalid_db_write_overflow_sq 00:10:26.632 00:10:26.632 Executing: test_invalid_db_write_overflow_cq 00:10:26.632 Waiting for AER completion... 
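Stripped of the xtrace noise, the doorbell loop traced above amounts to the following (a reconstruction of the nvme.sh fragment, not a verbatim copy). Each binary deliberately writes invalid doorbell values to provoke AERs, which is why the per-step Failure: lines print even though the suite still reaches END TEST nvme_doorbell_aers further down:

# discover the PCIe addresses, then exercise each controller's doorbells
bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
        /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
        -r "trtype:PCIe traddr:$bdf"
done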
00:10:26.632 Failure: test_invalid_db_write_overflow_cq 00:10:26.632 00:10:26.632 00:10:26.632 real 0m40.199s 00:10:26.632 user 0m34.194s 00:10:26.632 sys 0m5.612s 00:10:26.632 20:17:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.632 20:17:41 -- common/autotest_common.sh@10 -- # set +x 00:10:26.632 ************************************ 00:10:26.632 END TEST nvme_doorbell_aers 00:10:26.632 ************************************ 00:10:26.632 20:17:41 -- nvme/nvme.sh@97 -- # uname 00:10:26.632 20:17:41 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:26.632 20:17:41 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:26.632 20:17:41 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:26.632 20:17:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.632 20:17:41 -- common/autotest_common.sh@10 -- # set +x 00:10:26.632 ************************************ 00:10:26.632 START TEST nvme_multi_aen 00:10:26.632 ************************************ 00:10:26.632 20:17:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:26.632 [2024-10-16 20:17:41.232351] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:26.632 [2024-10-16 20:17:41.232579] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.632 [2024-10-16 20:17:41.367590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:26.632 [2024-10-16 20:17:41.367713] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.367786] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.367815] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.369093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:26.632 [2024-10-16 20:17:41.369180] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.369238] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.369265] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.370136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:26.632 [2024-10-16 20:17:41.370202] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.370266] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 
00:10:26.632 [2024-10-16 20:17:41.370293] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.371173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:26.632 [2024-10-16 20:17:41.371233] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.371294] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 [2024-10-16 20:17:41.371327] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64041) is not found. Dropping the request. 00:10:26.632 Child process pid: 64562 00:10:26.632 [2024-10-16 20:17:41.375038] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:26.632 [2024-10-16 20:17:41.375116] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.632 [Child] Asynchronous Event Request test 00:10:26.632 [Child] Attached to 0000:00:07.0 00:10:26.632 [Child] Attached to 0000:00:09.0 00:10:26.632 [Child] Attached to 0000:00:06.0 00:10:26.632 [Child] Attached to 0000:00:08.0 00:10:26.632 [Child] Registering asynchronous event callbacks... 00:10:26.632 [Child] Getting orig temperature thresholds of all controllers 00:10:26.632 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:26.632 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:26.632 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:26.632 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:26.632 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:26.632 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:26.632 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:26.632 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:26.632 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:26.632 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.632 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.632 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.632 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.632 [Child] Cleaning up... 00:10:26.891 Asynchronous Event Request test 00:10:26.891 Attached to 0000:00:07.0 00:10:26.891 Attached to 0000:00:09.0 00:10:26.891 Attached to 0000:00:06.0 00:10:26.891 Attached to 0000:00:08.0 00:10:26.891 Reset controller to setup AER completions for this process 00:10:26.891 Registering asynchronous event callbacks... 
00:10:26.891 Getting orig temperature thresholds of all controllers 00:10:26.891 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:26.891 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:26.891 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:26.891 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:26.891 Setting all controllers temperature threshold low to trigger AER 00:10:26.891 Waiting for all controllers temperature threshold to be set lower 00:10:26.891 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:26.891 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:26.891 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:26.891 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:26.891 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:26.891 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:26.891 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:26.891 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:26.891 Waiting for all controllers to trigger AER and reset threshold 00:10:26.891 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.891 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.891 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.891 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:26.891 Cleaning up... 00:10:26.891 00:10:26.891 real 0m0.397s 00:10:26.891 user 0m0.124s 00:10:26.891 sys 0m0.162s 00:10:26.891 20:17:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.891 20:17:41 -- common/autotest_common.sh@10 -- # set +x 00:10:26.891 ************************************ 00:10:26.891 END TEST nvme_multi_aen 00:10:26.891 ************************************ 00:10:26.891 20:17:41 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:26.891 20:17:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:26.891 20:17:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.891 20:17:41 -- common/autotest_common.sh@10 -- # set +x 00:10:26.891 ************************************ 00:10:26.891 START TEST nvme_startup 00:10:26.891 ************************************ 00:10:26.891 20:17:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:27.149 Initializing NVMe Controllers 00:10:27.149 Attached to 0000:00:07.0 00:10:27.149 Attached to 0000:00:09.0 00:10:27.149 Attached to 0000:00:06.0 00:10:27.149 Attached to 0000:00:08.0 00:10:27.149 Initialization complete. 00:10:27.149 Time used:143511.578 (us). 
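The startup test reports its own wall time in microseconds; 143511.578 us is about 143.5 ms, comfortably inside the -t 1000000 budget if, as the units suggest, that argument is also in microseconds (an assumption — the log never spells it out):

awk 'BEGIN { printf "%.1f ms\n", 143511.578/1000 }'   # -> 143.5 ms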
00:10:27.149 00:10:27.149 real 0m0.200s 00:10:27.149 user 0m0.056s 00:10:27.149 sys 0m0.096s 00:10:27.149 ************************************ 00:10:27.149 END TEST nvme_startup 00:10:27.149 ************************************ 00:10:27.149 20:17:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.149 20:17:41 -- common/autotest_common.sh@10 -- # set +x 00:10:27.149 20:17:41 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:27.149 20:17:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:27.149 20:17:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:27.149 20:17:41 -- common/autotest_common.sh@10 -- # set +x 00:10:27.149 ************************************ 00:10:27.149 START TEST nvme_multi_secondary 00:10:27.149 ************************************ 00:10:27.149 20:17:41 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:10:27.149 20:17:41 -- nvme/nvme.sh@52 -- # pid0=64607 00:10:27.149 20:17:41 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:27.149 20:17:41 -- nvme/nvme.sh@54 -- # pid1=64608 00:10:27.149 20:17:41 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:27.150 20:17:41 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:30.429 Initializing NVMe Controllers 00:10:30.429 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:30.429 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:30.429 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:30.429 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:30.429 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:30.429 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:30.429 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:30.429 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:30.429 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:30.429 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:30.429 Initialization complete. Launching workers. 
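The trace above starts three spdk_nvme_perf instances that share the controllers through shm id 0 (-i 0): the 5 s reader on core mask 0x1 comes up first and acts as the primary process, while the 0x2 and 0x4 readers attach as secondaries. A sketch of the same shape, with the backgrounding and waits simplified relative to the real nvme.sh:

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
pids=()
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pids+=($!)   # primary, core 0
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pids+=($!)   # secondary, core 1
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pids+=($!)   # secondary, core 2
wait "${pids[@]}"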
00:10:30.429 ======================================================== 00:10:30.429 Latency(us) 00:10:30.429 Device Information : IOPS MiB/s Average min max 00:10:30.429 PCIE (0000:00:07.0) NSID 1 from core 1: 7876.31 30.77 2031.00 757.58 6133.79 00:10:30.429 PCIE (0000:00:09.0) NSID 1 from core 1: 7876.31 30.77 2031.01 751.53 6039.58 00:10:30.429 PCIE (0000:00:06.0) NSID 1 from core 1: 7876.31 30.77 2030.06 735.49 5105.16 00:10:30.429 PCIE (0000:00:08.0) NSID 1 from core 1: 7876.31 30.77 2030.94 742.98 5580.59 00:10:30.429 PCIE (0000:00:08.0) NSID 2 from core 1: 7876.31 30.77 2030.91 768.59 5913.32 00:10:30.429 PCIE (0000:00:08.0) NSID 3 from core 1: 7876.31 30.77 2030.90 773.38 6593.88 00:10:30.429 ======================================================== 00:10:30.429 Total : 47257.89 184.60 2030.80 735.49 6593.88 00:10:30.429 00:10:30.687 Initializing NVMe Controllers 00:10:30.687 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:30.687 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:30.687 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:30.687 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:30.687 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:30.687 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:30.687 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:30.687 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:30.687 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:30.687 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:30.687 Initialization complete. Launching workers. 00:10:30.687 ======================================================== 00:10:30.687 Latency(us) 00:10:30.687 Device Information : IOPS MiB/s Average min max 00:10:30.687 PCIE (0000:00:07.0) NSID 1 from core 2: 3194.35 12.48 5008.45 1092.88 13718.02 00:10:30.687 PCIE (0000:00:09.0) NSID 1 from core 2: 3194.35 12.48 5008.52 1113.45 13735.21 00:10:30.687 PCIE (0000:00:06.0) NSID 1 from core 2: 3194.35 12.48 5006.78 1088.75 12743.46 00:10:30.687 PCIE (0000:00:08.0) NSID 1 from core 2: 3194.35 12.48 5007.99 1104.90 12276.12 00:10:30.687 PCIE (0000:00:08.0) NSID 2 from core 2: 3194.35 12.48 5008.30 1096.38 15649.96 00:10:30.687 PCIE (0000:00:08.0) NSID 3 from core 2: 3194.35 12.48 5008.28 949.28 12975.74 00:10:30.687 ======================================================== 00:10:30.687 Total : 19166.13 74.87 5008.05 949.28 15649.96 00:10:30.687 00:10:30.687 20:17:45 -- nvme/nvme.sh@56 -- # wait 64607 00:10:32.587 Initializing NVMe Controllers 00:10:32.587 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:32.587 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:32.587 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:32.587 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:32.587 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:32.587 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:32.587 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:32.587 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:32.587 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:32.587 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:32.587 Initialization complete. Launching workers. 
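The MiB/s column in these tables is just IOPS times the 4096-byte I/O size; two spot checks against the core 1 and core 2 rows above:

awk 'BEGIN { printf "%.2f\n", 7876.31*4096/1048576 }'   # -> 30.77 MiB/s (core 1 rows)
awk 'BEGIN { printf "%.2f\n", 3194.35*4096/1048576 }'   # -> 12.48 MiB/s (core 2 rows)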
00:10:32.587 ======================================================== 00:10:32.587 Latency(us) 00:10:32.587 Device Information : IOPS MiB/s Average min max 00:10:32.587 PCIE (0000:00:07.0) NSID 1 from core 0: 11311.82 44.19 1414.10 711.91 7289.83 00:10:32.587 PCIE (0000:00:09.0) NSID 1 from core 0: 11311.82 44.19 1414.09 717.98 7435.15 00:10:32.587 PCIE (0000:00:06.0) NSID 1 from core 0: 11311.82 44.19 1413.27 692.91 7368.44 00:10:32.587 PCIE (0000:00:08.0) NSID 1 from core 0: 11311.82 44.19 1414.06 698.17 7220.48 00:10:32.587 PCIE (0000:00:08.0) NSID 2 from core 0: 11315.02 44.20 1413.64 637.70 6376.09 00:10:32.587 PCIE (0000:00:08.0) NSID 3 from core 0: 11311.82 44.19 1414.03 607.69 6700.64 00:10:32.587 ======================================================== 00:10:32.587 Total : 67874.10 265.13 1413.87 607.69 7435.15 00:10:32.587 00:10:32.587 20:17:47 -- nvme/nvme.sh@57 -- # wait 64608 00:10:32.587 20:17:47 -- nvme/nvme.sh@61 -- # pid0=64677 00:10:32.587 20:17:47 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:32.587 20:17:47 -- nvme/nvme.sh@63 -- # pid1=64678 00:10:32.587 20:17:47 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:32.587 20:17:47 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:35.868 Initializing NVMe Controllers 00:10:35.868 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:35.868 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:35.868 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:35.868 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:35.868 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:35.868 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:35.868 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:35.868 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:35.868 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:35.868 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:35.868 Initialization complete. Launching workers. 
00:10:35.868 ======================================================== 00:10:35.868 Latency(us) 00:10:35.868 Device Information : IOPS MiB/s Average min max 00:10:35.868 PCIE (0000:00:07.0) NSID 1 from core 0: 7749.38 30.27 2064.26 767.48 6505.91 00:10:35.868 PCIE (0000:00:09.0) NSID 1 from core 0: 7749.38 30.27 2064.35 775.69 6261.37 00:10:35.868 PCIE (0000:00:06.0) NSID 1 from core 0: 7749.38 30.27 2063.36 756.38 6441.23 00:10:35.868 PCIE (0000:00:08.0) NSID 1 from core 0: 7749.38 30.27 2064.29 776.17 6124.39 00:10:35.868 PCIE (0000:00:08.0) NSID 2 from core 0: 7749.38 30.27 2064.26 760.03 6512.92 00:10:35.868 PCIE (0000:00:08.0) NSID 3 from core 0: 7749.38 30.27 2064.24 761.35 5932.56 00:10:35.868 ======================================================== 00:10:35.868 Total : 46496.31 181.63 2064.13 756.38 6512.92 00:10:35.868 00:10:35.868 Initializing NVMe Controllers 00:10:35.868 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:35.868 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:35.868 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:35.868 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:35.868 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:35.868 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:35.868 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:35.868 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:35.868 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:35.868 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:35.868 Initialization complete. Launching workers. 00:10:35.868 ======================================================== 00:10:35.868 Latency(us) 00:10:35.868 Device Information : IOPS MiB/s Average min max 00:10:35.868 PCIE (0000:00:07.0) NSID 1 from core 1: 7892.12 30.83 2026.93 730.07 5586.33 00:10:35.868 PCIE (0000:00:09.0) NSID 1 from core 1: 7892.12 30.83 2026.89 737.83 5255.57 00:10:35.868 PCIE (0000:00:06.0) NSID 1 from core 1: 7892.12 30.83 2025.92 713.96 5444.83 00:10:35.868 PCIE (0000:00:08.0) NSID 1 from core 1: 7892.12 30.83 2026.78 724.87 5448.30 00:10:35.868 PCIE (0000:00:08.0) NSID 2 from core 1: 7892.12 30.83 2026.73 636.15 6002.37 00:10:35.868 PCIE (0000:00:08.0) NSID 3 from core 1: 7892.12 30.83 2026.68 626.36 5349.96 00:10:35.868 ======================================================== 00:10:35.868 Total : 47352.74 184.97 2026.65 626.36 6002.37 00:10:35.868 00:10:38.411 Initializing NVMe Controllers 00:10:38.411 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:38.411 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:38.411 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:38.411 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:38.411 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:38.411 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:38.411 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:38.411 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:38.411 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:38.411 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:38.411 Initialization complete. Launching workers. 
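The Total row in each table is the sum of the six per-namespace rows; summing the core 1 table above shows the small drift that comes from each row being printed to two decimals first:

awk 'BEGIN { printf "%.2f\n", 6*7892.12 }'   # -> 47352.72, vs the printed total of 47352.74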
00:10:38.411 ======================================================== 00:10:38.411 Latency(us) 00:10:38.411 Device Information : IOPS MiB/s Average min max 00:10:38.411 PCIE (0000:00:07.0) NSID 1 from core 2: 4703.37 18.37 3402.33 789.17 12757.33 00:10:38.411 PCIE (0000:00:09.0) NSID 1 from core 2: 4703.37 18.37 3403.98 793.75 12903.48 00:10:38.411 PCIE (0000:00:06.0) NSID 1 from core 2: 4703.37 18.37 3402.84 774.88 13406.30 00:10:38.411 PCIE (0000:00:08.0) NSID 1 from core 2: 4703.37 18.37 3403.87 729.08 12911.81 00:10:38.411 PCIE (0000:00:08.0) NSID 2 from core 2: 4703.37 18.37 3400.74 799.11 12292.44 00:10:38.411 PCIE (0000:00:08.0) NSID 3 from core 2: 4703.37 18.37 3400.84 787.26 12357.25 00:10:38.411 ======================================================== 00:10:38.411 Total : 28220.20 110.24 3402.43 729.08 13406.30 00:10:38.411 00:10:38.411 20:17:52 -- nvme/nvme.sh@65 -- # wait 64677 00:10:38.411 ************************************ 00:10:38.411 END TEST nvme_multi_secondary 00:10:38.411 ************************************ 00:10:38.411 20:17:52 -- nvme/nvme.sh@66 -- # wait 64678 00:10:38.411 00:10:38.411 real 0m10.925s 00:10:38.411 user 0m18.643s 00:10:38.411 sys 0m0.639s 00:10:38.411 20:17:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.411 20:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:38.411 20:17:52 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:38.411 20:17:52 -- nvme/nvme.sh@102 -- # kill_stub 00:10:38.411 20:17:52 -- common/autotest_common.sh@1065 -- # [[ -e /proc/63615 ]] 00:10:38.411 20:17:52 -- common/autotest_common.sh@1066 -- # kill 63615 00:10:38.411 20:17:52 -- common/autotest_common.sh@1067 -- # wait 63615 00:10:38.996 [2024-10-16 20:17:53.652678] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:38.996 [2024-10-16 20:17:53.652840] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:38.996 [2024-10-16 20:17:53.652855] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:38.996 [2024-10-16 20:17:53.652865] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:39.263 [2024-10-16 20:17:54.166621] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:39.263 [2024-10-16 20:17:54.166687] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:39.263 [2024-10-16 20:17:54.166704] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:39.263 [2024-10-16 20:17:54.166720] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:39.829 [2024-10-16 20:17:54.681206] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:39.829 [2024-10-16 20:17:54.681352] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. 
Dropping the request. 00:10:39.829 [2024-10-16 20:17:54.681367] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:39.829 [2024-10-16 20:17:54.681378] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:41.743 [2024-10-16 20:17:56.185685] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:41.743 [2024-10-16 20:17:56.185729] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:41.743 [2024-10-16 20:17:56.185740] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:41.743 [2024-10-16 20:17:56.185753] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64561) is not found. Dropping the request. 00:10:41.743 20:17:56 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:10:41.743 20:17:56 -- common/autotest_common.sh@1073 -- # echo 2 00:10:41.743 20:17:56 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:41.743 20:17:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:41.743 20:17:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:41.743 20:17:56 -- common/autotest_common.sh@10 -- # set +x 00:10:41.743 ************************************ 00:10:41.743 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:41.743 ************************************ 00:10:41.743 20:17:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:41.743 * Looking for test storage... 
00:10:41.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:41.743 20:17:56 -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:41.743 20:17:56 -- common/autotest_common.sh@1509 -- # local bdfs 00:10:41.743 20:17:56 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:41.743 20:17:56 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:41.743 20:17:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:41.743 20:17:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:41.743 20:17:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:41.743 20:17:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:41.743 20:17:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:41.743 20:17:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:41.743 20:17:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:41.743 20:17:56 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:10:41.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64871 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:41.743 20:17:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64871 00:10:41.743 20:17:56 -- common/autotest_common.sh@819 -- # '[' -z 64871 ']' 00:10:41.743 20:17:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.743 20:17:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:41.743 20:17:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.743 20:17:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:41.743 20:17:56 -- common/autotest_common.sh@10 -- # set +x 00:10:41.743 [2024-10-16 20:17:56.557562] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
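Once the target is up, the test below arms an error injection on admin opcode 10 (0x0a, Get Features) with a 15 s timeout and sc=1/sct=0, then issues exactly that command through bdev_nvme_send_cmd with do_not_submit, so the controller reset two seconds later has a pending admin command to complete manually. The 64-byte base64 payload handed to the RPC decodes back to that submission entry — byte 0 is the 0x0a opcode, and the lone 0x07 at offset 0x28 (cdw10) is the NUMBER OF QUEUES feature id echoed as cdw10:00000007 in the completion print. A quick decode:

echo 'CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==' \
    | base64 -d | hexdump -C
# byte 0x00 = 0a  -> admin Get Features (--opc 10)
# byte 0x28 = 07  -> cdw10 feature id: NUMBER OF QUEUES; all other bytes zero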
00:10:41.743 [2024-10-16 20:17:56.558188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64871 ] 00:10:42.004 [2024-10-16 20:17:56.724008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.004 [2024-10-16 20:17:56.896191] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:42.004 [2024-10-16 20:17:56.896618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.004 [2024-10-16 20:17:56.896963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.004 [2024-10-16 20:17:56.897082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.004 [2024-10-16 20:17:56.897178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.382 20:17:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:43.382 20:17:58 -- common/autotest_common.sh@852 -- # return 0 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:43.382 20:17:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:43.382 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:10:43.382 nvme0n1 00:10:43.382 20:17:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_bL1Cd.txt 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:43.382 20:17:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:43.382 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:10:43.382 true 00:10:43.382 20:17:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1729109878 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64901 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:43.382 20:17:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:45.291 20:18:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.291 20:18:00 -- common/autotest_common.sh@10 -- # set +x 00:10:45.291 [2024-10-16 20:18:00.132424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:45.291 [2024-10-16 20:18:00.132750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:45.291 [2024-10-16 20:18:00.132824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:45.291 [2024-10-16 20:18:00.132885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.291 [2024-10-16 20:18:00.134427] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:45.291 20:18:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.291 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64901 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64901 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64901 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:45.291 20:18:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.291 20:18:00 -- common/autotest_common.sh@10 -- # set +x 00:10:45.291 20:18:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_bL1Cd.txt 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_bL1Cd.txt 00:10:45.291 20:18:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64871 00:10:45.291 20:18:00 -- common/autotest_common.sh@926 -- # '[' -z 64871 ']' 00:10:45.291 20:18:00 -- common/autotest_common.sh@930 -- # kill -0 64871 00:10:45.291 20:18:00 -- common/autotest_common.sh@931 -- # uname 00:10:45.550 20:18:00 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:45.550 20:18:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64871 00:10:45.550 killing process with pid 64871 00:10:45.550 20:18:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:45.550 20:18:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:45.550 20:18:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64871' 00:10:45.550 20:18:00 -- common/autotest_common.sh@945 -- # kill 64871 00:10:45.550 20:18:00 -- common/autotest_common.sh@950 -- # wait 64871 00:10:46.928 20:18:01 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:46.928 20:18:01 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:46.928 00:10:46.928 real 0m5.062s 00:10:46.928 user 0m18.163s 00:10:46.928 sys 0m0.462s 00:10:46.928 20:18:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.928 20:18:01 -- common/autotest_common.sh@10 -- # set +x 00:10:46.928 ************************************ 00:10:46.928 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:46.928 ************************************ 00:10:46.928 20:18:01 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:46.928 20:18:01 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:46.928 20:18:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:46.928 20:18:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:46.928 20:18:01 -- common/autotest_common.sh@10 -- # set +x 00:10:46.928 ************************************ 00:10:46.928 START TEST nvme_fio 00:10:46.928 ************************************ 00:10:46.928 20:18:01 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:10:46.928 20:18:01 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:46.928 20:18:01 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:46.928 20:18:01 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:46.928 20:18:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:46.928 20:18:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:46.928 20:18:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:46.928 20:18:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:46.928 20:18:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:46.928 20:18:01 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:46.928 20:18:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:46.928 20:18:01 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:46.928 20:18:01 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:46.928 20:18:01 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:46.928 20:18:01 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:46.928 20:18:01 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:46.928 20:18:01 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:46.928 20:18:01 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:47.189 20:18:01 -- nvme/nvme.sh@41 -- # bs=4096 00:10:47.189 20:18:01 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:47.189 20:18:01 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:47.189 20:18:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:10:47.189 20:18:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:47.189 20:18:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:10:47.189 20:18:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:47.189 20:18:01 -- common/autotest_common.sh@1320 -- # shift 00:10:47.189 20:18:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:10:47.189 20:18:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:10:47.189 20:18:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:10:47.189 20:18:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:47.189 20:18:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:10:47.189 20:18:01 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:47.189 20:18:01 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:47.189 20:18:01 -- common/autotest_common.sh@1326 -- # break 00:10:47.189 20:18:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:47.189 20:18:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:47.450 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:47.450 fio-3.35 00:10:47.450 Starting 1 thread 00:10:54.038 00:10:54.038 test: (groupid=0, jobs=1): err= 0: pid=65030: Wed Oct 16 20:18:07 2024 00:10:54.038 read: IOPS=21.2k, BW=83.0MiB/s (87.0MB/s)(166MiB/2001msec) 00:10:54.038 slat (usec): min=4, max=182, avg= 5.77, stdev= 2.55 00:10:54.038 clat (usec): min=243, max=8809, avg=3010.56, stdev=985.02 00:10:54.038 lat (usec): min=248, max=8815, avg=3016.34, stdev=986.14 00:10:54.038 clat percentiles (usec): 00:10:54.038 | 1.00th=[ 2147], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:10:54.038 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2737], 00:10:54.038 | 70.00th=[ 2868], 80.00th=[ 3228], 90.00th=[ 4293], 95.00th=[ 5473], 00:10:54.038 | 99.00th=[ 6718], 99.50th=[ 7242], 99.90th=[ 7963], 99.95th=[ 8160], 00:10:54.038 | 99.99th=[ 8356] 00:10:54.038 bw ( KiB/s): min=83352, max=87520, per=100.00%, avg=85546.67, stdev=2092.80, samples=3 00:10:54.038 iops : min=20838, max=21880, avg=21386.67, stdev=523.20, samples=3 00:10:54.038 write: IOPS=21.1k, BW=82.4MiB/s (86.4MB/s)(165MiB/2001msec); 0 zone resets 00:10:54.038 slat (nsec): min=4829, max=52040, avg=6150.53, stdev=2279.59 00:10:54.038 clat (usec): min=218, max=8977, avg=3013.91, stdev=976.57 00:10:54.038 lat (usec): min=224, max=8982, avg=3020.06, stdev=977.64 00:10:54.038 clat percentiles (usec): 00:10:54.038 | 1.00th=[ 2147], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2474], 00:10:54.038 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2737], 00:10:54.038 | 70.00th=[ 2868], 80.00th=[ 3228], 90.00th=[ 4359], 95.00th=[ 5473], 00:10:54.038 | 99.00th=[ 6652], 99.50th=[ 7242], 
99.90th=[ 8029], 99.95th=[ 8160], 00:10:54.038 | 99.99th=[ 8455] 00:10:54.038 bw ( KiB/s): min=84208, max=87160, per=100.00%, avg=85701.33, stdev=1476.31, samples=3 00:10:54.038 iops : min=21052, max=21790, avg=21425.33, stdev=369.08, samples=3 00:10:54.038 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:54.038 lat (msec) : 2=0.32%, 4=87.77%, 10=11.88% 00:10:54.038 cpu : usr=99.05%, sys=0.10%, ctx=7, majf=0, minf=608 00:10:54.038 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:54.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.038 issued rwts: total=42495,42191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.038 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.038 00:10:54.038 Run status group 0 (all jobs): 00:10:54.038 READ: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=166MiB (174MB), run=2001-2001msec 00:10:54.038 WRITE: bw=82.4MiB/s (86.4MB/s), 82.4MiB/s-82.4MiB/s (86.4MB/s-86.4MB/s), io=165MiB (173MB), run=2001-2001msec 00:10:54.038 ----------------------------------------------------- 00:10:54.038 Suppressions used: 00:10:54.038 count bytes template 00:10:54.038 1 32 /usr/src/fio/parse.c 00:10:54.038 1 8 libtcmalloc_minimal.so 00:10:54.038 ----------------------------------------------------- 00:10:54.038 00:10:54.038 20:18:08 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:54.038 20:18:08 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:54.038 20:18:08 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:54.038 20:18:08 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:54.038 20:18:08 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:54.038 20:18:08 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:54.038 20:18:08 -- nvme/nvme.sh@41 -- # bs=4096 00:10:54.038 20:18:08 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:54.038 20:18:08 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:54.038 20:18:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:10:54.038 20:18:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:54.038 20:18:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:10:54.038 20:18:08 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:54.038 20:18:08 -- common/autotest_common.sh@1320 -- # shift 00:10:54.038 20:18:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:10:54.038 20:18:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:10:54.038 20:18:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:10:54.038 20:18:08 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:54.038 20:18:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:10:54.038 20:18:08 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:54.038 20:18:08 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
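The asan_lib probing just traced (the break on the next line ends the sanitizer loop) is the fio_nvme helper locating the ASan runtime that the SPDK fio plugin links against: fio itself is not sanitizer-instrumented, so the runtime has to be preloaded ahead of the ioengine shared object or the plugin refuses to load. The pattern, as a standalone sketch using this run's paths (note the dots in traddr: fio treats ':' specially in --filename, so PCI-address colons are rewritten):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    # find the sanitizer runtime the plugin depends on, if any
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer runtime first, then the SPDK ioengine itself
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$cfg" \
        '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096

The --bs=4096 comes from the spdk_nvme_identify probe a few lines up: the namespace did not report an extended-data-LBA format, so plain 4 KiB blocks are used.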
00:10:54.038 20:18:08 -- common/autotest_common.sh@1326 -- # break 00:10:54.038 20:18:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:54.038 20:18:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:54.038 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:54.038 fio-3.35 00:10:54.038 Starting 1 thread 00:10:59.349 00:10:59.349 test: (groupid=0, jobs=1): err= 0: pid=65117: Wed Oct 16 20:18:13 2024 00:10:59.349 read: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(120MiB/2001msec) 00:10:59.349 slat (nsec): min=4803, max=80326, avg=6628.74, stdev=3689.55 00:10:59.349 clat (usec): min=1486, max=12336, avg=4160.53, stdev=1421.06 00:10:59.349 lat (usec): min=1491, max=12406, avg=4167.16, stdev=1422.40 00:10:59.349 clat percentiles (usec): 00:10:59.349 | 1.00th=[ 2573], 5.00th=[ 2802], 10.00th=[ 2933], 20.00th=[ 3097], 00:10:59.349 | 30.00th=[ 3228], 40.00th=[ 3392], 50.00th=[ 3556], 60.00th=[ 3851], 00:10:59.349 | 70.00th=[ 4490], 80.00th=[ 5407], 90.00th=[ 6325], 95.00th=[ 7111], 00:10:59.349 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[10683], 99.95th=[11338], 00:10:59.349 | 99.99th=[12256] 00:10:59.349 bw ( KiB/s): min=54968, max=63504, per=96.06%, avg=58741.33, stdev=4353.15, samples=3 00:10:59.349 iops : min=13742, max=15876, avg=14685.33, stdev=1088.29, samples=3 00:10:59.349 write: IOPS=15.3k, BW=59.8MiB/s (62.7MB/s)(120MiB/2001msec); 0 zone resets 00:10:59.349 slat (nsec): min=4906, max=92229, avg=6930.46, stdev=3821.33 00:10:59.349 clat (usec): min=812, max=12270, avg=4175.57, stdev=1414.67 00:10:59.349 lat (usec): min=832, max=12282, avg=4182.50, stdev=1416.04 00:10:59.349 clat percentiles (usec): 00:10:59.349 | 1.00th=[ 2573], 5.00th=[ 2835], 10.00th=[ 2933], 20.00th=[ 3097], 00:10:59.349 | 30.00th=[ 3261], 40.00th=[ 3392], 50.00th=[ 3589], 60.00th=[ 3884], 00:10:59.349 | 70.00th=[ 4490], 80.00th=[ 5342], 90.00th=[ 6325], 95.00th=[ 7177], 00:10:59.349 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[10552], 99.95th=[11207], 00:10:59.349 | 99.99th=[11863] 00:10:59.349 bw ( KiB/s): min=54264, max=63784, per=95.62%, avg=58570.67, stdev=4824.33, samples=3 00:10:59.349 iops : min=13566, max=15946, avg=14642.67, stdev=1206.08, samples=3 00:10:59.349 lat (usec) : 1000=0.01% 00:10:59.349 lat (msec) : 2=0.16%, 4=62.82%, 10=36.83%, 20=0.20% 00:10:59.349 cpu : usr=98.60%, sys=0.05%, ctx=2, majf=0, minf=608 00:10:59.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:59.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.349 issued rwts: total=30592,30643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.349 00:10:59.349 Run status group 0 (all jobs): 00:10:59.349 READ: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=120MiB (125MB), run=2001-2001msec 00:10:59.349 WRITE: bw=59.8MiB/s (62.7MB/s), 59.8MiB/s-59.8MiB/s (62.7MB/s-62.7MB/s), io=120MiB (126MB), run=2001-2001msec 00:10:59.349 ----------------------------------------------------- 00:10:59.349 Suppressions used: 00:10:59.349 count bytes template 00:10:59.349 1 32 /usr/src/fio/parse.c 00:10:59.349 1 8 libtcmalloc_minimal.so 00:10:59.349 
----------------------------------------------------- 00:10:59.349 00:10:59.349 20:18:13 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:59.349 20:18:13 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:59.349 20:18:13 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:59.349 20:18:13 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:59.349 20:18:13 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:59.349 20:18:13 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:59.349 20:18:13 -- nvme/nvme.sh@41 -- # bs=4096 00:10:59.349 20:18:13 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:59.349 20:18:13 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:59.349 20:18:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:10:59.349 20:18:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:59.349 20:18:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:10:59.349 20:18:13 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:59.349 20:18:13 -- common/autotest_common.sh@1320 -- # shift 00:10:59.349 20:18:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:10:59.350 20:18:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:10:59.350 20:18:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:59.350 20:18:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:10:59.350 20:18:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:10:59.350 20:18:13 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:59.350 20:18:13 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:59.350 20:18:13 -- common/autotest_common.sh@1326 -- # break 00:10:59.350 20:18:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:59.350 20:18:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:59.350 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:59.350 fio-3.35 00:10:59.350 Starting 1 thread 00:11:04.614 00:11:04.615 test: (groupid=0, jobs=1): err= 0: pid=65190: Wed Oct 16 20:18:18 2024 00:11:04.615 read: IOPS=16.8k, BW=65.5MiB/s (68.7MB/s)(131MiB/2001msec) 00:11:04.615 slat (nsec): min=4195, max=61485, avg=5653.91, stdev=3056.79 00:11:04.615 clat (usec): min=233, max=11101, avg=3784.53, stdev=1489.87 00:11:04.615 lat (usec): min=238, max=11162, avg=3790.18, stdev=1491.20 00:11:04.615 clat percentiles (usec): 00:11:04.615 | 1.00th=[ 2114], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2540], 00:11:04.615 | 30.00th=[ 2671], 40.00th=[ 2835], 50.00th=[ 3130], 60.00th=[ 3654], 00:11:04.615 | 70.00th=[ 4555], 80.00th=[ 5342], 90.00th=[ 5997], 95.00th=[ 6521], 00:11:04.615 | 99.00th=[ 7767], 99.50th=[ 8291], 99.90th=[ 9241], 99.95th=[10290], 00:11:04.615 | 99.99th=[11076] 
00:11:04.615 bw ( KiB/s): min=58552, max=78304, per=100.00%, avg=67289.67, stdev=10070.89, samples=3 00:11:04.615 iops : min=14638, max=19576, avg=16822.33, stdev=2517.75, samples=3 00:11:04.615 write: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(131MiB/2001msec); 0 zone resets 00:11:04.615 slat (nsec): min=4267, max=69443, avg=5914.48, stdev=3101.27 00:11:04.615 clat (usec): min=252, max=11035, avg=3807.59, stdev=1496.34 00:11:04.615 lat (usec): min=258, max=11048, avg=3813.51, stdev=1497.70 00:11:04.615 clat percentiles (usec): 00:11:04.615 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:11:04.615 | 30.00th=[ 2671], 40.00th=[ 2835], 50.00th=[ 3130], 60.00th=[ 3654], 00:11:04.615 | 70.00th=[ 4621], 80.00th=[ 5407], 90.00th=[ 5997], 95.00th=[ 6587], 00:11:04.615 | 99.00th=[ 7767], 99.50th=[ 8225], 99.90th=[ 9503], 99.95th=[10421], 00:11:04.615 | 99.99th=[10945] 00:11:04.615 bw ( KiB/s): min=58008, max=78176, per=99.95%, avg=67207.00, stdev=10199.84, samples=3 00:11:04.615 iops : min=14502, max=19544, avg=16801.67, stdev=2549.98, samples=3 00:11:04.615 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:04.615 lat (msec) : 2=0.34%, 4=63.57%, 10=35.99%, 20=0.08% 00:11:04.615 cpu : usr=98.70%, sys=0.15%, ctx=7, majf=0, minf=608 00:11:04.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:04.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.615 issued rwts: total=33577,33637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.615 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.615 00:11:04.615 Run status group 0 (all jobs): 00:11:04.615 READ: bw=65.5MiB/s (68.7MB/s), 65.5MiB/s-65.5MiB/s (68.7MB/s-68.7MB/s), io=131MiB (138MB), run=2001-2001msec 00:11:04.615 WRITE: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=131MiB (138MB), run=2001-2001msec 00:11:04.615 ----------------------------------------------------- 00:11:04.615 Suppressions used: 00:11:04.615 count bytes template 00:11:04.615 1 32 /usr/src/fio/parse.c 00:11:04.615 1 8 libtcmalloc_minimal.so 00:11:04.615 ----------------------------------------------------- 00:11:04.615 00:11:04.615 20:18:19 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:04.615 20:18:19 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:04.615 20:18:19 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:04.615 20:18:19 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:04.615 20:18:19 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:04.615 20:18:19 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:04.615 20:18:19 -- nvme/nvme.sh@41 -- # bs=4096 00:11:04.615 20:18:19 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:04.615 20:18:19 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:04.615 20:18:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:04.615 20:18:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:04.615 20:18:19 -- common/autotest_common.sh@1318 -- # 
local sanitizers 00:11:04.615 20:18:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.615 20:18:19 -- common/autotest_common.sh@1320 -- # shift 00:11:04.615 20:18:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:04.615 20:18:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:04.615 20:18:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:04.615 20:18:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.615 20:18:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:04.615 20:18:19 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:04.615 20:18:19 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:04.615 20:18:19 -- common/autotest_common.sh@1326 -- # break 00:11:04.615 20:18:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:04.615 20:18:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:04.874 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:04.874 fio-3.35 00:11:04.874 Starting 1 thread 00:11:12.988 00:11:12.988 test: (groupid=0, jobs=1): err= 0: pid=65256: Wed Oct 16 20:18:27 2024 00:11:12.988 read: IOPS=16.8k, BW=65.8MiB/s (69.0MB/s)(132MiB/2001msec) 00:11:12.988 slat (nsec): min=3227, max=70635, avg=5618.21, stdev=3109.02 00:11:12.988 clat (usec): min=212, max=10804, avg=3776.20, stdev=1494.41 00:11:12.988 lat (usec): min=217, max=10809, avg=3781.81, stdev=1495.69 00:11:12.988 clat percentiles (usec): 00:11:12.988 | 1.00th=[ 2008], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2507], 00:11:12.988 | 30.00th=[ 2638], 40.00th=[ 2835], 50.00th=[ 3163], 60.00th=[ 3752], 00:11:12.988 | 70.00th=[ 4555], 80.00th=[ 5276], 90.00th=[ 6063], 95.00th=[ 6587], 00:11:12.988 | 99.00th=[ 7570], 99.50th=[ 8029], 99.90th=[ 9241], 99.95th=[ 9765], 00:11:12.988 | 99.99th=[10552] 00:11:12.988 bw ( KiB/s): min=66208, max=69056, per=100.00%, avg=67544.00, stdev=1432.13, samples=3 00:11:12.988 iops : min=16552, max=17264, avg=16886.00, stdev=358.03, samples=3 00:11:12.988 write: IOPS=16.9k, BW=66.0MiB/s (69.2MB/s)(132MiB/2001msec); 0 zone resets 00:11:12.988 slat (nsec): min=3392, max=89173, avg=5892.97, stdev=3125.68 00:11:12.988 clat (usec): min=196, max=11509, avg=3783.92, stdev=1501.18 00:11:12.988 lat (usec): min=201, max=11514, avg=3789.81, stdev=1502.45 00:11:12.988 clat percentiles (usec): 00:11:12.988 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2507], 00:11:12.988 | 30.00th=[ 2638], 40.00th=[ 2835], 50.00th=[ 3163], 60.00th=[ 3752], 00:11:12.988 | 70.00th=[ 4555], 80.00th=[ 5276], 90.00th=[ 6063], 95.00th=[ 6587], 00:11:12.988 | 99.00th=[ 7701], 99.50th=[ 8160], 99.90th=[ 9241], 99.95th=[ 9765], 00:11:12.988 | 99.99th=[10814] 00:11:12.988 bw ( KiB/s): min=66024, max=69040, per=99.86%, avg=67453.33, stdev=1514.14, samples=3 00:11:12.988 iops : min=16506, max=17260, avg=16863.33, stdev=378.54, samples=3 00:11:12.988 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:11:12.988 lat (msec) : 2=1.00%, 4=61.82%, 10=37.10%, 20=0.03% 00:11:12.988 cpu : usr=98.80%, sys=0.15%, ctx=3, majf=0, minf=606 00:11:12.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 
00:11:12.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.988 issued rwts: total=33710,33789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.988 00:11:12.988 Run status group 0 (all jobs): 00:11:12.988 READ: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=132MiB (138MB), run=2001-2001msec 00:11:12.988 WRITE: bw=66.0MiB/s (69.2MB/s), 66.0MiB/s-66.0MiB/s (69.2MB/s-69.2MB/s), io=132MiB (138MB), run=2001-2001msec 00:11:12.988 ----------------------------------------------------- 00:11:12.988 Suppressions used: 00:11:12.988 count bytes template 00:11:12.988 1 32 /usr/src/fio/parse.c 00:11:12.988 1 8 libtcmalloc_minimal.so 00:11:12.988 ----------------------------------------------------- 00:11:12.988 00:11:12.988 ************************************ 00:11:12.988 END TEST nvme_fio 00:11:12.988 ************************************ 00:11:12.988 20:18:27 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:12.988 20:18:27 -- nvme/nvme.sh@46 -- # true 00:11:12.988 00:11:12.988 real 0m26.264s 00:11:12.988 user 0m15.988s 00:11:12.988 sys 0m18.594s 00:11:12.988 20:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.988 20:18:27 -- common/autotest_common.sh@10 -- # set +x 00:11:12.988 ************************************ 00:11:12.988 END TEST nvme 00:11:12.988 ************************************ 00:11:12.988 00:11:12.988 real 1m41.267s 00:11:12.988 user 3m42.279s 00:11:12.988 sys 0m29.478s 00:11:12.988 20:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.988 20:18:27 -- common/autotest_common.sh@10 -- # set +x 00:11:12.988 20:18:27 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:12.988 20:18:27 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:12.988 20:18:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:12.988 20:18:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:12.988 20:18:27 -- common/autotest_common.sh@10 -- # set +x 00:11:12.988 ************************************ 00:11:12.988 START TEST nvme_scc 00:11:12.988 ************************************ 00:11:12.988 20:18:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:12.988 * Looking for test storage... 
00:11:12.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:12.988 20:18:27 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:12.988 20:18:27 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:12.988 20:18:27 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:13.247 20:18:27 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:13.247 20:18:27 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:13.247 20:18:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.247 20:18:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.247 20:18:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.247 20:18:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.247 20:18:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.247 20:18:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.247 20:18:27 -- paths/export.sh@5 -- # export PATH 00:11:13.247 20:18:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.247 20:18:27 -- nvme/functions.sh@10 -- # ctrls=() 00:11:13.247 20:18:27 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:13.247 20:18:27 -- nvme/functions.sh@11 -- # nvmes=() 00:11:13.247 20:18:27 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:13.247 20:18:27 -- nvme/functions.sh@12 -- # bdfs=() 00:11:13.247 20:18:27 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:13.247 20:18:27 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:13.247 20:18:27 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:13.247 20:18:27 -- nvme/functions.sh@14 -- # nvme_name= 00:11:13.247 20:18:27 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:13.247 20:18:27 -- nvme/nvme_scc.sh@12 -- # uname 00:11:13.247 20:18:27 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
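After one more platform check below, nvme_scc resets the PCI bindings via setup.sh and calls scan_nvme_ctrls, which snapshots every controller's id-ctrl output into a bash associative array named after the device; that is what produces the long run of nvme0[...] assignments further down. A minimal sketch of that parse, assuming nvme-cli's usual 'field : value' id-ctrl layout (the loop and names here are illustrative, not functions.sh's exact internals):

    declare -A nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue    # skip headers and blank lines
        reg=${reg//[[:space:]]/}                # 'vid       ' -> 'vid'
        eval "nvme0[$reg]=\"${val# }\""         # e.g. nvme0[vid]="0x1b36"
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[mdts]}"                       # 7 for this QEMU controller

Keeping the values in shell arrays lets later checks (such as this nvme_scc test verifying that oncs advertises the copy command) consult controller capabilities without re-running identify.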
00:11:13.247 20:18:27 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:13.247 20:18:27 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:13.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:13.505 Waiting for block devices as requested 00:11:13.763 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:13.763 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:13.763 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:13.764 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.035 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:19.035 20:18:33 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:19.035 20:18:33 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:19.035 20:18:33 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:19.035 20:18:33 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:19.035 20:18:33 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:19.035 20:18:33 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:19.035 20:18:33 -- scripts/common.sh@15 -- # local i 00:11:19.035 20:18:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:19.035 20:18:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:19.035 20:18:33 -- scripts/common.sh@24 -- # return 0 00:11:19.035 20:18:33 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:19.035 20:18:33 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:19.035 20:18:33 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:19.035 20:18:33 -- nvme/functions.sh@18 -- # shift 00:11:19.035 20:18:33 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.035 20:18:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:19.035 20:18:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.035 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:19.035 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:19.035 20:18:33 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.035 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:19.035 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:19.035 20:18:33 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.035 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:19.035 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:19.035 20:18:33 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.035 20:18:33 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:19.035 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:19.035 20:18:33 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.035 20:18:33 -- nvme/functions.sh@21 -- # read 
-r reg val
[id-ctrl register read loop for nvme0, condensed to the values it assigned:]
  fr='8.0.0 '  rab=6  ieee=525400  cmic=0x2  mdts=7  cntlid=0  ver=0x10400  rtd3r=0  rtd3e=0  oaes=0x100  ctratt=0x88010  rrls=0  cntrltype=1
  fguid=00000000-0000-0000-0000-000000000000  crdt1=0  crdt2=0  crdt3=0  nvmsr=0  vwci=0  mec=0  oacs=0x12a  acl=3  aerl=3  frmw=0x3  lpa=0x7  elpe=0  npss=0  avscc=0  apsta=0
  wctemp=343  cctemp=373  mtfa=0  hmpre=0  hmmin=0  tnvmcap=0  unvmcap=0  rpmbs=0  edstt=0  dsto=0  fwug=0  kas=0  hctma=0  mntmt=0  mxtmt=0  sanicap=0
  hmminds=0  hmmaxd=0  nsetidmax=0  endgidmax=1  anatt=0  anacap=0  anagrpmax=0  nanagrpid=0  pels=0  domainid=0  megcap=0  sqes=0x66  cqes=0x44  maxcmd=0  nn=256  oncs=0x15d
  fuses=0  fna=0  vwc=0x7  awun=0  awupf=0  icsvscc=0  nwpc=0  acwu=0  ocfs=0x3  sgls=0x1  mnan=0  maxdna=0  maxcna=0
  subnqn=nqn.2019-08.org.qemu:fdp-subsys3
00:11:19.037 20:18:33 --
nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:19.037 20:18:33 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.037 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.037 20:18:33 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:19.037 20:18:33 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:19.037 20:18:33 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:19.037 20:18:33 -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:09.0 00:11:19.037 20:18:33 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:19.037 20:18:33 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:19.037 20:18:33 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:19.037 20:18:33 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:19.037 20:18:33 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:19.037 20:18:33 -- scripts/common.sh@15 -- # local i 00:11:19.038 20:18:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:19.038 20:18:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:19.038 20:18:33 -- scripts/common.sh@24 -- # return 0 00:11:19.038 20:18:33 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:19.038 20:18:33 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:19.038 20:18:33 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@18 -- # shift 00:11:19.038 20:18:33 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 
525400 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 
'nvme1[frmw]="0x3"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.038 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:19.038 20:18:33 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.038 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
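
The repeated IFS=: / read -r reg val / eval triplets above are all one pattern: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl against the device, splits each output line at the first colon into a register name and a value, skips empty values with the [[ -n ... ]] guard, and assigns the pair into a global associative array such as nvme1. A minimal sketch of that pattern, assuming a simplified nvme_get (illustrative, not the exact nvme/functions.sh source):

  # Sketch: parse "field : value" lines from nvme-cli into a global
  # associative array named by the caller. Assumed simplification of the
  # helper traced above, not a copy of SPDK's nvme/functions.sh.
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                  # e.g. declare -gA nvme1=()
      while IFS=: read -r reg val; do      # split at the first ':'
          reg=${reg//[[:space:]]/}         # "ps   0 " -> "ps0"
          val=${val# }                     # drop the space after the colon
          [[ -n $val ]] || continue        # the [[ -n ... ]] guard in the trace
          eval "${ref}[${reg}]=\"\$val\""  # e.g. nvme1[sqes]="0x66"
      done < <(nvme "$@")                  # e.g. nvme id-ctrl /dev/nvme1
  }

  nvme_get nvme1 id-ctrl /dev/nvme1
  echo "${nvme1[sqes]}"                    # 0x66 on this QEMU controller

Because read -r reg val leaves everything after the first colon in val, multi-colon values such as the power-state line (mp:25.00W operational enlat:16 ...) and the subsystem NQN survive the split intact.
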
00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 
20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- 
# IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:19.039 20:18:33 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.039 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.039 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- 
# nvme1[icsvscc]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:19.040 20:18:33 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:19.040 20:18:33 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:19.040 20:18:33 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:19.040 20:18:33 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@18 -- # shift 00:11:19.040 20:18:33 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 
20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.040 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.040 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:19.040 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:19.041 20:18:33 -- 
nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 
20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.041 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.041 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:19.041 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:19.042 
20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:19.042 20:18:33 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:19.042 20:18:33 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:19.042 20:18:33 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:19.042 20:18:33 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@18 -- # shift 00:11:19.042 20:18:33 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 
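
The nvme1n1 geometry parsed just above is internally consistent: flbas=0x4 selects LBA format 4 (the low nibble of flbas is the format index), whose descriptor reads "ms:0 lbads:12 rp:0 (in use)", i.e. 4096-byte blocks with no metadata, and nsze=0x100000 blocks of that size give the namespace capacity. A quick check, values taken from the trace:

  nsze=$((0x100000))               # nvme1n1[nsze]: 1048576 logical blocks
  lbads=12                         # nvme1n1[lbaf4]: ms:0 lbads:12 rp:0 (in use)
  echo $((nsze * (1 << lbads)))    # 4294967296 bytes = 4 GiB
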
00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.042 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:19.042 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:19.042 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 
00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 
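[annotation] As with nvme1n1, this namespace reports an all-zero nguid and eui64, so these QEMU namespaces expose no stable unique identifier. A hedged check over an array filled as in the trace (has_unique_id is hypothetical; needs bash 4.3+ for namerefs):

has_unique_id() {
  local -n ns=$1   # nameref to an already-populated array, e.g. nvme1n2
  [[ ${ns[nguid]} != 00000000000000000000000000000000 ||
     ${ns[eui64]} != 0000000000000000 ]]
}
has_unique_id nvme1n2 || echo "nvme1n2: no NGUID/EUI64"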
20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:19.043 20:18:33 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:19.043 20:18:33 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:19.043 20:18:33 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 
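[annotation] The loop at @54-@58, now moving on to nvme1n3, enumerates a controller's namespaces by globbing its sysfs children and indexes _ctrl_ns by the digits after the last 'n'. A standalone sketch of the same pattern:

ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/${ctrl##*/}n"*; do        # matches nvme1n1, nvme1n2, ...
  [[ -e $ns ]] || continue                 # glob may match nothing
  echo "_ctrl_ns[${ns##*n}] <- ${ns##*/}"  # ${ns##*n} -> 1, 2, 3
done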
id-ns /dev/nvme1n3 00:11:19.043 20:18:33 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@18 -- # shift 00:11:19.043 20:18:33 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # 
nvme1n3[dps]=0 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.043 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.043 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.043 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.044 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.044 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:19.044 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:19.045 20:18:33 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:19.045 20:18:33 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:19.045 20:18:33 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:19.045 20:18:33 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:19.045 20:18:33 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:19.045 20:18:33 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:19.045 20:18:33 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:19.045 20:18:33 -- scripts/common.sh@15 -- # local i 00:11:19.045 20:18:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:19.045 20:18:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:19.045 20:18:33 -- scripts/common.sh@24 -- # return 0 00:11:19.045 20:18:33 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:19.045 20:18:33 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:19.045 20:18:33 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@18 -- # shift 00:11:19.045 20:18:33 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:19.045 20:18:33 -- 
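[annotation] With all three namespaces captured, nvme1 is registered at @60-@63 in three global maps plus an ordered list, and the walk advances to nvme2 once the pci_can_use filter accepts 0000:00:06.0. The resulting bookkeeping, sketched as literal declarations:

declare -A ctrls=( [nvme1]=nvme1 )         # device -> controller array name
declare -A nvmes=( [nvme1]=nvme1_ns )      # device -> its namespace map
declare -A bdfs=(  [nvme1]=0000:00:08.0 )  # device -> PCI address (BDF)
declare -a ordered_ctrls
ordered_ctrls[1]=nvme1                     # index from ${ctrl_dev/nvme/}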
nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 
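[annotation] nvme2 identifies as another QEMU controller: vid 0x1b36 (Red Hat/QEMU), ssvid 0x1af4, serial "12340", and ver 0x10400. The version field packs major/minor/tertiary into bytes, and mdts=7 bounds transfers at 2^7 minimum-size pages (128 x 4 KiB = 512 KiB with a 4 KiB MPSMIN). A small decode sketch:

decode_ver() {     # VS layout: MJR[31:16] MNR[15:8] TER[7:0]
  local v=$(( $1 ))
  printf 'NVMe %d.%d.%d\n' $((v >> 16)) $(((v >> 8) & 0xff)) $((v & 0xff))
}
decode_ver 0x10400   # -> NVMe 1.4.0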
20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.045 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.045 20:18:33 -- 
nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:19.045 20:18:33 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.045 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- 
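[annotation] oacs=0x12a captured just above decodes to bits 1, 3, 5 and 8 of the Optional Admin Command Support field: Format NVM, Namespace Management, Directives and Doorbell Buffer Config, the last being the usual tell of a paravirtual QEMU device. A sketch, with bit names per the NVMe base spec:

oacs=0x12a
names=([0]="Security Send/Recv" [1]="Format NVM" [2]="FW Download/Commit"
       [3]="Namespace Mgmt" [4]="Self-test" [5]="Directives"
       [6]="NVMe-MI" [7]="Virt Mgmt" [8]="Doorbell Buffer Config")
for bit in "${!names[@]}"; do
  (( oacs & (1 << bit) )) && echo "oacs bit $bit: ${names[bit]}"
done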
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- 
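[annotation] wctemp=343 and cctemp=373 above are reported in kelvins, i.e. roughly 70 C warning and 100 C critical composite-temperature thresholds:

k2c() { echo "$(( $1 - 273 )) C"; }   # kelvin -> Celsius, integer approx.
k2c 343    # -> 70 C (warning threshold)
k2c 373    # -> 100 C (critical threshold)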
nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:19.046 20:18:33 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.046 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.046 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:19.308 
20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 
'nvme2[oncs]="0x15d"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.308 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:19.308 20:18:33 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.308 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
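[annotation] Two packed fields in this stretch are worth decoding: sqes=0x66/cqes=0x44 (low nibble = required entry size, high nibble = maximum, both log2, so 64 B submission and 16 B completion queue entries only) and oncs=0x15d, whose set bits 0, 2, 3, 4, 6 and 8 advertise Compare, Dataset Management, Write Zeroes, the Save field in Set Features, Timestamp and Copy. A sketch of both decodes:

decode_qes() {      # entry-size field: min in bits 3:0, max in bits 7:4
  local v=$(( $1 ))
  printf 'min=%d B max=%d B\n' $((1 << (v & 0xf))) $((1 << (v >> 4)))
}
decode_qes 0x66     # -> min=64 B max=64 B (submission queue entries)
decode_qes 0x44     # -> min=16 B max=16 B (completion queue entries)

oncs=0x15d
oncs_names=(compare write-uncorrectable dsm write-zeroes save-field
            reservations timestamp verify copy)
for i in "${!oncs_names[@]}"; do
  (( oncs & (1 << i) )) && echo "oncs: ${oncs_names[i]}"
done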
00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:19.309 20:18:33 -- 
nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:19.309 20:18:33 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:19.309 20:18:33 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:19.309 20:18:33 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:19.309 20:18:33 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@18 -- # shift 00:11:19.309 20:18:33 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 
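At @53-57 above, the scan moves from the controller to its namespaces: a nameref (_ctrl_ns -> nvme2_ns) collects one id-ns array per nvmeXnY node under /sys/class/nvme. Condensed into a sketch of what the trace shows, as it would run inside the scan loop with $ctrl and $ctrl_dev already set:

  local -n _ctrl_ns=${ctrl_dev}_ns              # nvme2_ns
  for ns in "$ctrl/${ctrl##*/}n"*; do           # /sys/class/nvme/nvme2/nvme2n1 ...
      [[ -e $ns ]] || continue                  # unmatched-glob guard (@55)
      ns_dev=${ns##*/}                          # nvme2n1
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills nvme2n1=( [nsze]=... )
      _ctrl_ns[${ns_dev##*n}]=$ns_dev           # key by namespace number (@58)
  done
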
00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.309 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:19.309 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:19.309 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:19.310 20:18:33 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:19.310 20:18:33 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:19.310 20:18:33 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:19.310 20:18:33 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:19.310 20:18:33 -- nvme/functions.sh@47 -- # for ctrl 
in /sys/class/nvme/nvme* 00:11:19.310 20:18:33 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:19.310 20:18:33 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:19.310 20:18:33 -- scripts/common.sh@15 -- # local i 00:11:19.310 20:18:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:19.310 20:18:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:19.310 20:18:33 -- scripts/common.sh@24 -- # return 0 00:11:19.310 20:18:33 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:19.310 20:18:33 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:19.310 20:18:33 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@18 -- # shift 00:11:19.310 20:18:33 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:19.310 20:18:33 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.310 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.310 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:19.311 20:18:33 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:33 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:33 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:19.311 20:18:34 -- 
nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.311 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.311 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.311 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:19.312 20:18:34 
-- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:19.312 20:18:34 -- 
nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
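The sqes/cqes values being read here are packed log2 sizes from Identify Controller: bits 3:0 give the required (minimum) queue-entry size in bytes as a power of two, bits 7:4 the maximum. Decoding the 0x66/0x44 this controller reports:

  v=0x66; echo "SQE: $((1 << (v & 0xf)))-$((1 << ((v >> 4) & 0xf))) bytes"   # 64-64
  v=0x44; echo "CQE: $((1 << (v & 0xf)))-$((1 << ((v >> 4) & 0xf))) bytes"   # 16-16
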
00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.312 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:19.312 20:18:34 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:19.312 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # 
nvme3[icdoff]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:19.313 20:18:34 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:19.313 20:18:34 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:19.313 20:18:34 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:19.313 20:18:34 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@18 -- # shift 00:11:19.313 20:18:34 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 
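nsze/ncap/nuse for nvme3n1 are counted in logical blocks, not bytes; the flbas=0x4 read a few lines below selects lbaf4, whose lbads:12 means 2^12-byte blocks, so the namespace size works out as:

  echo $(( 0x140000 * (1 << 12) ))   # 5368709120 bytes = exactly 5 GiB

The same arithmetic applies to nvme2n1 earlier, where flbas=0x7 picked lbaf7, the 4 KiB format with 64 bytes of metadata marked "(in use)".
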
00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.313 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.313 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:19.313 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:19.314 20:18:34 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:19.314 20:18:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.314 20:18:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # eval 
00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1 id-ns fields: nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:19.314 20:18:34 -- nvme/functions.sh@23 -- # nvme3n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:11:19.314 20:18:34 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1
00:11:19.314 20:18:34 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:11:19.314 20:18:34 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:11:19.314 20:18:34 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0
00:11:19.314 20:18:34 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:11:19.314 20:18:34 -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:11:19.314 20:18:34 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:11:19.314 20:18:34 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature scc))
00:11:19.314 20:18:34 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1: oncs=0x15d, (( oncs & 1 << 8 )) is true -> echo nvme1
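The whole SCC selection reduces to one bit test: ONCS bit 8 is the Copy (simple copy) command in the NVMe base spec. A sketch, with the value taken from the trace:

    # SCC support check as seen above; oncs comes from Identify Controller.
    oncs=0x15d                      # same value on all 4 controllers here
    if (( oncs & (1 << 8) )); then  # bit 8: Copy command supported
        echo "controller supports SCC"
    fi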
00:11:19.315 20:18:34 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0: oncs=0x15d -> echo nvme0
00:11:19.315 20:18:34 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3: oncs=0x15d -> echo nvme3
00:11:19.315 20:18:34 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2: oncs=0x15d -> echo nvme2
00:11:19.315 20:18:34 -- nvme/functions.sh@205 -- # (( 4 > 0 ))
00:11:19.315 20:18:34 -- nvme/functions.sh@206 -- # echo nvme1
00:11:19.315 20:18:34 -- nvme/functions.sh@207 -- # return 0
00:11:19.315 20:18:34 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:19.315 20:18:34 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0
00:11:19.315 20:18:34 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:20.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:20.249 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:11:20.249 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:11:20.249 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:11:20.249 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:11:20.249 20:18:35 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
00:11:20.249 20:18:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:11:20.249 20:18:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:20.249 20:18:35 -- common/autotest_common.sh@10 -- # set +x
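The "nvme -> uio_pci_generic" lines above are setup.sh rebinding the test devices away from the kernel driver. A small sketch that reports the active driver the same way, assuming only the standard sysfs layout:

    # Print "<bdf> -> <driver>" for each NVMe PCI function listed in the log.
    for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
        link=/sys/bus/pci/devices/$bdf/driver
        [[ -e $link ]] || { echo "$bdf -> (unbound)"; continue; }
        echo "$bdf -> $(basename "$(readlink -f "$link")")"
    done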
00:11:20.249 ************************************
00:11:20.249 START TEST nvme_simple_copy
00:11:20.249 ************************************
00:11:20.249 20:18:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
00:11:20.507 Initializing NVMe Controllers
00:11:20.507 Attaching to 0000:00:08.0
00:11:20.507 Controller supports SCC. Attached to 0000:00:08.0
00:11:20.507 Namespace ID: 1 size: 4GB
00:11:20.507 Initialization complete.
00:11:20.508 Controller QEMU NVMe Ctrl (12342 )
00:11:20.508 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:20.508 Namespace Block Size:4096
00:11:20.508 Writing LBAs 0 to 63 with Random Data
00:11:20.508 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:20.508 LBAs matching Written Data: 64
00:11:20.508 real 0m0.257s
00:11:20.508 user 0m0.088s
00:11:20.508 sys 0m0.068s
00:11:20.508 ************************************
00:11:20.508 END TEST nvme_simple_copy
00:11:20.508 ************************************
00:11:20.508 20:18:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:20.508 20:18:35 -- common/autotest_common.sh@10 -- # set +x
00:11:20.766 real 0m7.625s
00:11:20.766 user 0m0.975s
00:11:20.766 sys 0m1.403s
00:11:20.766 20:18:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:20.766 ************************************
00:11:20.766 END TEST nvme_scc
00:11:20.766 ************************************
00:11:20.766 20:18:35 -- common/autotest_common.sh@10 -- # set +x
00:11:20.766 20:18:35 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]]
00:11:20.766 20:18:35 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]]
00:11:20.766 20:18:35 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]]
00:11:20.766 20:18:35 -- spdk/autotest.sh@238 -- # [[ 1 -eq 1 ]]
00:11:20.766 20:18:35 -- spdk/autotest.sh@239 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:20.766 20:18:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:20.766 20:18:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:20.766 20:18:35 -- common/autotest_common.sh@10 -- # set +x
00:11:20.766 ************************************
00:11:20.766 START TEST nvme_fdp
00:11:20.766 ************************************
00:11:20.766 20:18:35 -- common/autotest_common.sh@1104 -- # test/nvme/nvme_fdp.sh
00:11:20.766 * Looking for test storage...
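The START/END banners and the real/user/sys totals above come from SPDK's run_test wrapper in common/autotest_common.sh; it has roughly this shape (a sketch, not the exact implementation):

    # Minimal run_test-style wrapper: banner, timed test body, banner.
    banner='************************************'
    run_test() {
        local name=$1; shift
        echo "$banner"; echo "START TEST $name"; echo "$banner"
        time "$@"                   # the bash builtin prints real/user/sys
        echo "$banner"; echo "END TEST $name"; echo "$banner"
    }
    run_test nvme_fdp test/nvme/nvme_fdp.sh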
00:11:20.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:20.766 20:18:35 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:20.766 20:18:35 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:20.766 20:18:35 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:11:20.766 20:18:35 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:20.766 20:18:35 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:20.766 20:18:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:20.766 20:18:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:20.766 20:18:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:20.766 20:18:35 -- paths/export.sh@2 -- # prepended /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH and exported it
00:11:20.766 20:18:35 -- nvme/functions.sh@10 -- # declare -A ctrls nvmes bdfs
00:11:20.766 20:18:35 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:11:20.766 20:18:35 -- nvme/functions.sh@14 -- # nvme_name=
00:11:20.766 20:18:35 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:20.766 20:18:35 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:11:21.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:21.332 Waiting for block devices as requested
00:11:21.333 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:11:21.333 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:11:21.333 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:11:21.591 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:11:26.866 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:11:26.866 20:18:41 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:11:26.866 20:18:41 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:11:26.866 20:18:41 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:11:26.866 20:18:41 -- nvme/functions.sh@49 -- # pci=0000:00:09.0
00:11:26.866 20:18:41 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 (pci_can_use 0000:00:09.0 returned 0, no allow/block list set)
00:11:26.866 20:18:41 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:11:26.867 20:18:41 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:11:26.867 20:18:41 -- nvme/functions.sh@23 -- # nvme0 id-ctrl fields: vid=0x1b36 ssvid=0x1af4 sn='12343' mn='QEMU NVMe Ctrl' fr='8.0.0'
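The pci_can_use gates in this scan pass trivially because no device filter is set; an approximate sketch of that check (variable names are an assumption, SPDK's scripts/common.sh uses an allow/block list along these lines):

    # Allow a PCI function unless it is blocked, or unless an allowlist
    # exists and the function is not on it. Empty lists allow everything,
    # which matches the immediate "return 0" in the trace.
    pci_can_use() {
        local bdf=$1
        [[ -n $PCI_BLOCKED && $PCI_BLOCKED == *"$bdf"* ]] && return 1
        [[ -z $PCI_ALLOWED ]] && return 0
        [[ $PCI_ALLOWED == *"$bdf"* ]]
    }
    pci_can_use 0000:00:09.0 && echo "0000:00:09.0 is usable"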
00:11:26.867 20:18:41 -- nvme/functions.sh@23 -- # nvme0 id-ctrl fields: rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010
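ctratt=0x88010 is the value this test ultimately cares about: bit 19 of CTRATT is the Flexible Data Placement attribute in recent NVMe base specs, and nvme1 below reports ctratt=0x8000 with that bit clear. A quick check, with both values taken from this log:

    # FDP capability test on the CTRATT field from Identify Controller.
    ctratt=0x88010                   # nvme0 (sn 12343, fdp-subsys3)
    if (( ctratt & (1 << 19) )); then
        echo "controller advertises FDP"
    fi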
00:11:26.868 20:18:41 -- nvme/functions.sh@23 -- # nvme0 id-ctrl fields: rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:11:26.868 20:18:41 -- nvme/functions.sh@23 -- # nvme0 id-ctrl fields: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0
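oncs=0x15d shows up on every controller in this run; decoded bit by bit it names the optional NVM commands the device advertises (bit names per the NVMe base spec's ONCS field):

    # Decode ONCS into the optional commands it advertises.
    oncs=0x15d
    names=(Compare Write-Uncorrectable Dataset-Management Write-Zeroes
           Save/Select-Features Reservations Timestamp Verify Copy)
    for i in "${!names[@]}"; do
        (( oncs & (1 << i) )) && echo "bit $i: ${names[$i]}"
    done
    # 0x15d -> Compare, Dataset-Management, Write-Zeroes,
    # Save/Select-Features, Timestamp, Copy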
00:11:26.869 20:18:41 -- nvme/functions.sh@23 -- # nvme0 id-ctrl fields: iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:11:26.869 20:18:41 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:26.869 20:18:41 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:26.869 20:18:41 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:26.869 20:18:41 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0
00:11:26.869 20:18:41 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
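With nvme0 registered, the scan's associative arrays now map each controller name to its namespace table and PCI address; a usage sketch with the values recorded just above:

    # Lookups the rest of the suite does constantly (values from the trace).
    declare -A ctrls=([nvme0]=nvme0) nvmes=([nvme0]=nvme0_ns)
    declare -A bdfs=([nvme0]=0000:00:09.0)
    echo "nvme0 lives at ${bdfs[nvme0]}"
    for ctrl in "${!ctrls[@]}"; do
        echo "$ctrl -> namespace table ${nvmes[$ctrl]}"
    done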
00:11:26.869 20:18:41 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:26.869 20:18:41 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:26.869 20:18:41 -- nvme/functions.sh@49 -- # pci=0000:00:08.0
00:11:26.869 20:18:41 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 (pci_can_use 0000:00:08.0 returned 0)
00:11:26.869 20:18:41 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:26.869 20:18:41 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:26.870 20:18:41 -- nvme/functions.sh@23 -- # nvme1 id-ctrl fields: vid=0x1b36 ssvid=0x1af4 sn='12342' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0
nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
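The sqes/cqes values captured just below (0x66 and 0x44) each pack two powers of two: per the NVMe Identify Controller layout, bits 7:4 give the maximum and bits 3:0 the required queue entry size as log2 of bytes. A minimal decode sketch (decode_es is a hypothetical helper, not part of nvme/functions.sh):

    # Hypothetical helper: decode an SQES/CQES byte.
    # Bits 7:4 = maximum entry size, bits 3:0 = required entry size,
    # both expressed as log2 of the size in bytes.
    decode_es() {
      local es=$(($1))
      echo "max=$((1 << (es >> 4)))B required=$((1 << (es & 0xf)))B"
    }
    decode_es 0x66   # sqes: max=64B required=64B (64-byte submission queue entries)
    decode_es 0x44   # cqes: max=16B required=16B (16-byte completion queue entries)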
00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.871 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.871 20:18:41 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:26.871 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # 
nvme1[icdoff]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:26.872 20:18:41 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:26.872 20:18:41 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:26.872 20:18:41 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:26.872 20:18:41 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@18 -- # shift 00:11:26.872 20:18:41 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 
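The functions.sh@17-@23 lines traced above restate nvme_get's inner loop: split each line of nvme-cli output on ':', skip entries with no value, and eval the pair into a global associative array named by the first argument. A condensed sketch of that pattern (nvme_get_sketch is a hypothetical stand-in; the real function in nvme/functions.sh also handles whitespace and multi-word values such as the lbaf and ps0 rows differently):

    # Hypothetical restatement of the traced loop, simplified for illustration.
    nvme_get_sketch() {
      local ref=$1 reg val                               # functions.sh@17
      shift                                              # functions.sh@18
      local -gA "$ref=()"                                # functions.sh@20, e.g. nvme1n1=()
      while IFS=: read -r reg val; do                    # functions.sh@21
        [[ -n $val ]] || continue                        # functions.sh@22
        eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\"" # functions.sh@23, e.g. nvme1n1[nsze]="0x100000"
      done < <("$@")
    }
    # Usage mirroring the trace:
    # nvme_get_sketch nvme1n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1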
00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:26.872 20:18:41 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.872 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.872 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.872 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nvmsetid]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:26.873 20:18:41 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:26.873 20:18:41 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:26.873 20:18:41 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:26.873 20:18:41 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@18 -- # shift 00:11:26.873 20:18:41 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:26.873 
20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:26.873 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.873 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.873 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 
'nvme1n2[nacwu]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:26.874 20:18:41 
-- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:26.874 20:18:41 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:26.874 20:18:41 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:26.874 20:18:41 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:26.874 20:18:41 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:26.874 20:18:41 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:26.874 20:18:41 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@18 -- # shift 
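nvme1n2 is complete at this point: functions.sh@58 files it into the controller's namespace map and the @54-@57 loop moves on to nvme1n3. The enumeration pattern, restated as a hedged sketch (scan_ctrl_namespaces is a hypothetical wrapper; in functions.sh the loop runs inline, nvme_get is the real parser, and the nvme1_ns associative array is assumed to be declared beforehand):

    # Hypothetical wrapper around the inline loop at functions.sh@53-58.
    scan_ctrl_namespaces() {
      local ctrl=$1 ns ns_dev                       # ctrl: e.g. /sys/class/nvme/nvme1
      local -n _ctrl_ns="${ctrl##*/}_ns"            # functions.sh@53: nameref onto nvme1_ns
      for ns in "$ctrl/${ctrl##*/}n"*; do           # functions.sh@54: nvme1n1 nvme1n2 ...
        [[ -e $ns ]] || continue                    # functions.sh@55
        ns_dev=${ns##*/}                            # functions.sh@56: e.g. nvme1n2
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # functions.sh@57: populate nvme1n2[...]
        _ctrl_ns[${ns_dev##*n}]=$ns_dev             # functions.sh@58: nvme1_ns[2]=nvme1n2
      done
    }

Note also that flbas=0x4 in each id-ns dump selects LBA format 4, whose descriptor reads "ms:0 lbads:12 rp:0 (in use)": 2^12 = 4096-byte logical blocks with no metadata, which is why lbaf4 is the one marked in use.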
00:11:26.874 20:18:41 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.874 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.874 20:18:41 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:26.874 20:18:41 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 
20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:26.875 20:18:41 
-- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.875 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:26.875 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.875 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:26.876 20:18:41 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:26.876 20:18:41 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:26.876 20:18:41 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:26.876 20:18:41 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:26.876 20:18:41 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:26.876 20:18:41 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:26.876 20:18:41 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:26.876 20:18:41 -- scripts/common.sh@15 -- # local i 00:11:26.876 20:18:41 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:26.876 20:18:41 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:26.876 20:18:41 -- scripts/common.sh@24 -- # return 0 00:11:26.876 20:18:41 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:26.876 20:18:41 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:26.876 20:18:41 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@18 -- # shift 00:11:26.876 20:18:41 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 
'nvme2[rtd3r]="0"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.876 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:26.876 20:18:41 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.876 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 
00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:26.877 
20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:26.877 
20:18:41 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.877 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.877 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.877 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 
20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 
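[Editor's note] Once the controller-level fields (through the power-state entries ps0/rwt traced just below) are captured, the same nvme_get is repeated per namespace with id-ns, and the results are registered in a set of global maps: _ctrl_ns (namespace id -> device node), plus ctrls, nvmes, bdfs and ordered_ctrls keyed by controller name (functions.sh@53-@63 in this trace). A reconstructed sketch of that enumeration loop follows; the sysfs readlink for the PCI address is an assumption, since the trace only shows the resulting value (pci=0000:00:06.0), and the pci_can_use allow/block-list gate from scripts/common.sh is elided:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                             # e.g. nvme2
        pci=$(basename "$(readlink -f "$ctrl/device")")  # BDF, e.g. 0000:00:06.0
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -A "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns="${ctrl_dev}_ns"
        for ns in "$ctrl/${ctrl##*/}n"*; do              # nvme2n1, nvme2n2, ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                  # index by namespace id
        done
        unset -n _ctrl_ns                                # avoid nameref-reuse pitfalls
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]="${ctrl_dev}_ns"
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

The ordered_ctrls index (${ctrl_dev/nvme/} -> 2 for nvme2) presumably lets later stages iterate controllers in numeric order regardless of sysfs glob order.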
00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:26.878 20:18:41 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.878 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.878 20:18:41 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:26.879 20:18:41 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:26.879 20:18:41 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:26.879 20:18:41 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:26.879 20:18:41 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@18 -- # shift 00:11:26.879 20:18:41 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 
-- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:26.879 20:18:41 -- 
nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.879 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:26.879 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:26.879 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.880 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.880 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:26.880 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:26.880 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:26.880 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.880 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.880 20:18:41 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:26.880 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:26.880 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:26.880 20:18:41 -- nvme/functions.sh@21 -- # IFS=: 00:11:26.880 20:18:41 -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.880 20:18:41 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:26.880 20:18:41 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:26.880 20:18:41 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 
lbads:9 rp:0 '
00:11:26.880 20:18:41 -- nvme/functions.sh -- # [xtrace condensed] nvme_get nvme2n1 id-ns: lbaf1-lbaf7 parsed, in use lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:11:26.880 20:18:41 -- nvme/functions.sh -- # [xtrace condensed] nvme2 registered: ns nvme2n1, bdf 0000:00:06.0
00:11:26.880 20:18:41 -- nvme/functions.sh -- # [xtrace condensed] /sys/class/nvme/nvme3 found, pci 0000:00:07.0 usable -> ctrl_dev=nvme3; nvme_get nvme3 id-ctrl /dev/nvme3
00:11:26.880 20:18:41 -- nvme/functions.sh -- # [xtrace condensed] nvme3 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12341' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 wctemp=343 cctemp=373 (rtd3r rtd3e crdt1-3 nvmsr vwci mec elpe npss avscc apsta mtfa hmpre hmmin tnvmcap all 0)
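A unit note on the two non-zero thermal fields just parsed: NVMe reports wctemp and cctemp (the warning and critical composite temperature thresholds) in Kelvin, so 343 and 373 are less alarming than they look. A throwaway shell helper (illustrative only, not part of nvme/functions.sh):

    # NVMe composite temperature thresholds are reported in Kelvin;
    # integer approximation of K - 273.15 for readable output.
    k2c() { echo "$(( $1 - 273 ))"; }

    echo "wctemp: $(k2c 343) C"   # warning threshold  -> 70 C
    echo "cctemp: $(k2c 373) C"   # critical threshold -> 100 C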
00:11:26.881 20:18:41 -- nvme/functions.sh -- # [xtrace condensed] nvme3 id-ctrl (cont.): sqes=0x66 cqes=0x44 nn=256 oncs=0x15d vwc=0x7 ocfs=0x3 sgls=0x1 subnqn=nqn.2019-08.org.qemu:12341 (unvmcap rpmbs edstt dsto fwug kas hctma mntmt mxtmt sanicap hmminds hmmaxd nsetidmax endgidmax anatt anacap anagrpmax nanagrpid pels domainid megcap maxcmd fuses fna awun awupf icsvscc nwpc acwu mnan maxdna maxcna ioccsz iorcsz icdoff all 0)
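Everything condensed above comes from one small helper: nvme_get pipes nvme-cli's id-ctrl output through an IFS=: / read -r splitter and evals each "reg : val" pair into a bash associative array (nvme3, nvme3n1, ...) that the later feature checks read back. A simplified, self-contained sketch of that pattern (array name, trimming, and device path are illustrative, not the verbatim functions.sh code):

    #!/usr/bin/env bash
    # Parse "field : value" lines from nvme-cli into an associative array,
    # the same shape nvme/functions.sh builds for each controller.
    declare -A ctrl

    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue          # skip banners and blank lines
      reg=${reg//[[:space:]]/}           # "ps    0   " -> "ps0"
      val=${val# }                       # drop the space after ':'
      ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)

    echo "mdts=${ctrl[mdts]} ctratt=${ctrl[ctratt]} subnqn=${ctrl[subnqn]}"

Because read -r assigns everything after the first colon to the last variable, values that themselves contain colons (subnqn=nqn.2019-08.org.qemu:12341, the ps0 power-state string) survive intact.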
00:11:26.882 20:18:41 -- nvme/functions.sh -- # [xtrace condensed] nvme3 id-ctrl (cont.): fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:11:26.883 20:18:41 -- nvme/functions.sh -- # [xtrace condensed] nvme_get nvme3n1 id-ns /dev/nvme3n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127 nguid=00000000000000000000000000000000 eui64=0000000000000000; lbaf0-lbaf7 parsed, in use lbaf4='ms:0 lbads:12 rp:0 (in use)' (remaining fields all 0)
00:11:26.884 20:18:41 -- nvme/functions.sh -- # [xtrace condensed] nvme3 registered: ns nvme3n1, bdf 0000:00:07.0; 4 controllers discovered in total
00:11:26.884 20:18:41 -- nvme/nvme_fdp.sh@13 -- # [xtrace condensed] get_ctrl_with_feature fdp -> ctrl_has_fdp scan: nvme1 ctratt=0x8000 (bit 19 clear), nvme0 get_ctratt -> 0x88010
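The scan above reduces to a single predicate: bit 19 of CTRATT (Flexible Data Placement) from Identify Controller. nvme0's 0x88010 has that bit set; the 0x8000-only controllers fail the test. A standalone version of the same check (device paths are examples; the text parsing differs from functions.sh, but the bit test is identical):

    #!/usr/bin/env bash
    # True when a controller advertises FDP: CTRATT bit 19 in id-ctrl.
    ctrl_has_fdp() {
      local dev=$1 ctratt
      ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/[[:space:]]/,"",$2); print $2}')
      (( ctratt & 1 << 19 ))
    }

    for dev in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
      ctrl_has_fdp "$dev" && echo "$dev supports FDP"
    done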
00:11:26.884 20:18:41 -- nvme/functions.sh -- # [xtrace condensed] nvme0 ctratt=0x88010: (( ctratt & 1 << 19 )) true -> FDP supported -> echo nvme0; nvme3 ctratt=0x8000 (bit 19 clear), nvme2 ctratt=0x8000 (bit 19 clear); 1 FDP-capable controller found
00:11:26.884 20:18:41 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0
00:11:26.884 20:18:41 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0
00:11:26.884 20:18:41 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:27.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:27.823 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:11:27.823 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:11:27.823 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:11:27.823 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:11:28.084 20:18:42 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0'
00:11:28.084 20:18:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:11:28.084 20:18:42 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:28.084 20:18:42 -- common/autotest_common.sh@10 -- # set +x
00:11:28.084 ************************************
00:11:28.084 START TEST nvme_flexible_data_placement
00:11:28.084 ************************************
00:11:28.084 20:18:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0'
00:11:28.346 Initializing NVMe Controllers
00:11:28.346 Attaching to 0000:00:09.0
00:11:28.346 Controller supports FDP Attached to 0000:00:09.0
00:11:28.346 Namespace ID: 1 Endurance Group ID: 1
00:11:28.346 Initialization complete.
00:11:28.346 
00:11:28.346 ==================================
00:11:28.346 == FDP tests for Namespace: #01 ==
00:11:28.346 ==================================
00:11:28.346 
00:11:28.346 Get Feature: FDP:
00:11:28.346 =================
00:11:28.346 Enabled: Yes
00:11:28.346 FDP configuration Index: 0
00:11:28.346 
00:11:28.346 FDP configurations log page
00:11:28.346 ===========================
00:11:28.346 Number of FDP configurations: 1
00:11:28.346 Version: 0
00:11:28.346 Size: 112
00:11:28.346 FDP Configuration Descriptor: 0
00:11:28.346 Descriptor Size: 96
00:11:28.346 Reclaim Group Identifier format: 2
00:11:28.346 FDP Volatile Write Cache: Not Present
00:11:28.346 FDP Configuration: Valid
00:11:28.346 Vendor Specific Size: 0
00:11:28.346 Number of Reclaim Groups: 2
00:11:28.346 Number of Reclaim Unit Handles: 8
00:11:28.346 Max Placement Identifiers: 128
00:11:28.346 Number of Namespaces Supported: 256
00:11:28.346 Reclaim Unit Nominal Size: 6000000 bytes
00:11:28.346 Estimated Reclaim Unit Time Limit: Not Reported
00:11:28.346 RUH Desc #000: RUH Type: Initially Isolated
00:11:28.346 RUH Desc #001: RUH Type: Initially Isolated
00:11:28.346 RUH Desc #002: RUH Type: Initially Isolated
00:11:28.346 RUH Desc #003: RUH Type: Initially Isolated
00:11:28.346 RUH Desc #004: RUH Type: Initially Isolated
00:11:28.346 RUH Desc #005: RUH Type: Initially Isolated
00:11:28.346 RUH Desc #006: RUH Type: Initially Isolated
00:11:28.346 RUH Desc #007: RUH Type: Initially Isolated
00:11:28.346 
00:11:28.346 FDP reclaim unit handle usage log page
00:11:28.346 ======================================
00:11:28.346 Number of Reclaim Unit Handles: 8
00:11:28.346 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:11:28.346 RUH Usage Desc #001: RUH Attributes: Unused
00:11:28.346 RUH Usage Desc #002: RUH Attributes: Unused
00:11:28.346 RUH Usage Desc #003: RUH Attributes: Unused
00:11:28.346 RUH Usage Desc #004: RUH Attributes: Unused
00:11:28.346 RUH Usage Desc #005: RUH Attributes: Unused
00:11:28.346 RUH Usage Desc #006: RUH Attributes: Unused
00:11:28.346 RUH Usage Desc #007: RUH Attributes: Unused
00:11:28.346 
00:11:28.346 FDP statistics log page
00:11:28.346 =======================
00:11:28.346 Host bytes with metadata written: 774844416
00:11:28.346 Media bytes with metadata written: 774991872
00:11:28.346 Media bytes erased: 0
00:11:28.346 
00:11:28.346 FDP Reclaim unit handle status
00:11:28.346 ==============================
00:11:28.346 Number of RUHS descriptors: 2
00:11:28.346 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001d0d
00:11:28.346 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
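The four report blocks just printed correspond to the FDP log pages introduced by TP4146: configurations (log ID 0x20), reclaim unit handle usage (0x21), statistics (0x22), and events (0x23). Assuming a reasonably current nvme-cli, the raw pages can be pulled by log ID; lengths and the endurance-group selector vary by device, so treat the loop as illustrative rather than a verified invocation:

    # Dump the FDP log pages by LID (TP4146); sizes are examples, and
    # multi-endurance-group drives also need the group selected via LSI.
    for lid in 0x20 0x21 0x22 0x23; do
      nvme get-log /dev/nvme0 --log-id=$lid --log-len=512
    done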
00:11:28.346 FDP write on placement id: 0 success
00:11:28.346 
00:11:28.346 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:11:28.346 
00:11:28.346 IO mgmt send: RUH update for Placement ID: #0 Success
00:11:28.346 
00:11:28.346 Get Feature: FDP Events for Placement handle: #0
00:11:28.346 ========================
00:11:28.346 Number of FDP Events: 6
00:11:28.346 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:11:28.346 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:11:28.346 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:11:28.346 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:11:28.346 FDP Event: #4 Type: Media Reallocated Enabled: No
00:11:28.346 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:11:28.346 
00:11:28.346 FDP events log page
00:11:28.346 ===================
00:11:28.346 Number of FDP events: 1
00:11:28.346 FDP Event #0:
00:11:28.346 Event Type: RU Not Written to Capacity
00:11:28.346 Placement Identifier: Valid
00:11:28.346 NSID: Valid
00:11:28.346 Location: Valid
00:11:28.346 Placement Identifier: 0
00:11:28.346 Event Timestamp: b
00:11:28.346 Namespace Identifier: 1
00:11:28.346 Reclaim Group Identifier: 0
00:11:28.346 Reclaim Unit Handle Identifier: 0
00:11:28.346 
00:11:28.346 FDP test passed
00:11:28.346 
00:11:28.346 real 0m0.259s
00:11:28.346 user 0m0.076s
00:11:28.346 sys 0m0.081s
00:11:28.346 ************************************
00:11:28.346 END TEST nvme_flexible_data_placement
00:11:28.346 ************************************
00:11:28.346 20:18:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:28.346 20:18:43 -- common/autotest_common.sh@10 -- # set +x
00:11:28.346 
00:11:28.346 real 0m7.555s
00:11:28.346 user 0m0.935s
00:11:28.346 sys 0m1.437s
00:11:28.346 20:18:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:28.346 20:18:43 -- common/autotest_common.sh@10 -- # set +x
00:11:28.346 ************************************
00:11:28.346 END TEST nvme_fdp
00:11:28.346 ************************************
00:11:28.346 20:18:43 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]]
00:11:28.346 20:18:43 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:11:28.346 20:18:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:28.346 20:18:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:28.346 20:18:43 -- common/autotest_common.sh@10 -- # set +x
00:11:28.346 ************************************
00:11:28.346 START TEST nvme_rpc
00:11:28.346 ************************************
00:11:28.346 20:18:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:11:28.346 * Looking for test storage...
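Looking back at the FDP run that just finished: the "FDP write on placement id: 0" step is an ordinary NVMe write carrying the data placement directive (DTYPE=2) with the placement identifier in DSPEC. Outside SPDK's fdp binary, roughly the same operation can be issued with nvme-cli's directive flags; the device path, LBA, and sizes below are placeholders, so take this as a sketch rather than a verified command line:

    # One 4 KiB write routed to placement identifier 0 via the data
    # placement directive (DTYPE=2, DSPEC=0) on an FDP-enabled namespace.
    dd if=/dev/zero of=/tmp/fdp_buf bs=4096 count=1
    nvme write /dev/nvme0n1 --start-block=0 --block-count=0 \
         --data-size=4096 --data=/tmp/fdp_buf --dir-type=2 --dir-spec=0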
00:11:28.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:28.346 20:18:43 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:28.346 20:18:43 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:11:28.346 20:18:43 -- common/autotest_common.sh@1509 -- # bdfs=()
00:11:28.346 20:18:43 -- common/autotest_common.sh@1509 -- # local bdfs
00:11:28.346 20:18:43 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:11:28.346 20:18:43 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:11:28.346 20:18:43 -- common/autotest_common.sh@1498 -- # bdfs=()
00:11:28.346 20:18:43 -- common/autotest_common.sh@1498 -- # local bdfs
00:11:28.346 20:18:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:28.346 20:18:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:11:28.346 20:18:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:11:28.608 20:18:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:11:28.608 20:18:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:11:28.608 20:18:43 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0
00:11:28.608 20:18:43 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0
00:11:28.608 20:18:43 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66701
00:11:28.608 20:18:43 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:11:28.608 20:18:43 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66701
00:11:28.608 20:18:43 -- common/autotest_common.sh@819 -- # '[' -z 66701 ']'
00:11:28.608 20:18:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:28.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:28.608 20:18:43 -- common/autotest_common.sh@824 -- # local max_retries=100
00:11:28.608 20:18:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:28.608 20:18:43 -- common/autotest_common.sh@828 -- # xtrace_disable
00:11:28.608 20:18:43 -- common/autotest_common.sh@10 -- # set +x
00:11:28.608 20:18:43 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:11:28.608 [2024-10-16 20:18:43.361969] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
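get_first_nvme_bdf above reduces to a jq one-liner: gen_nvme.sh emits a JSON bdev config, and each attach parameter's traddr is an NVMe PCI address. The same pipeline runs fine standalone (paths as in this workspace; the echo is just for show):

    # Enumerate NVMe BDFs the way get_nvme_bdfs does in autotest_common.sh.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"     # 0000:00:06.0 ... 0000:00:09.0 here
    echo "first bdf: ${bdfs[0]}"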
00:11:28.608 [2024-10-16 20:18:43.362134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66701 ] 00:11:28.608 [2024-10-16 20:18:43.516545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:28.869 [2024-10-16 20:18:43.786447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:28.869 [2024-10-16 20:18:43.787010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.869 [2024-10-16 20:18:43.787029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.254 20:18:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:30.254 20:18:44 -- common/autotest_common.sh@852 -- # return 0 00:11:30.254 20:18:44 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:30.515 Nvme0n1 00:11:30.515 20:18:45 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:30.515 20:18:45 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:30.515 request: 00:11:30.515 { 00:11:30.515 "filename": "non_existing_file", 00:11:30.515 "bdev_name": "Nvme0n1", 00:11:30.515 "method": "bdev_nvme_apply_firmware", 00:11:30.515 "req_id": 1 00:11:30.515 } 00:11:30.515 Got JSON-RPC error response 00:11:30.515 response: 00:11:30.515 { 00:11:30.515 "code": -32603, 00:11:30.515 "message": "open file failed." 00:11:30.515 } 00:11:30.515 20:18:45 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:30.515 20:18:45 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:30.515 20:18:45 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:30.776 20:18:45 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:30.776 20:18:45 -- nvme/nvme_rpc.sh@40 -- # killprocess 66701 00:11:30.776 20:18:45 -- common/autotest_common.sh@926 -- # '[' -z 66701 ']' 00:11:30.776 20:18:45 -- common/autotest_common.sh@930 -- # kill -0 66701 00:11:30.776 20:18:45 -- common/autotest_common.sh@931 -- # uname 00:11:30.776 20:18:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:30.776 20:18:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66701 00:11:30.776 20:18:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:30.776 20:18:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:30.776 killing process with pid 66701 00:11:30.776 20:18:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66701' 00:11:30.776 20:18:45 -- common/autotest_common.sh@945 -- # kill 66701 00:11:30.776 20:18:45 -- common/autotest_common.sh@950 -- # wait 66701 00:11:32.162 ************************************ 00:11:32.162 END TEST nvme_rpc 00:11:32.162 ************************************ 00:11:32.162 00:11:32.162 real 0m3.745s 00:11:32.162 user 0m7.068s 00:11:32.162 sys 0m0.697s 00:11:32.162 20:18:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.162 20:18:46 -- common/autotest_common.sh@10 -- # set +x 00:11:32.162 20:18:46 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:32.162 20:18:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:32.162 20:18:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 
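The nvme_rpc test above is intentionally negative: it attaches the first controller over PCIe as Nvme0, asks bdev_nvme_apply_firmware to load a file that does not exist, and passes only if the target returns the -32603 "open file failed." JSON-RPC error captured in the log. A minimal sketch of the same sequence against an already-running spdk_tgt, using the paths as the test does:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    # expected to fail with code -32603, message "open file failed."
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo 'got expected firmware-apply failure'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0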
00:11:32.162 20:18:46 -- common/autotest_common.sh@10 -- # set +x 00:11:32.162 ************************************ 00:11:32.162 START TEST nvme_rpc_timeouts 00:11:32.162 ************************************ 00:11:32.162 20:18:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:32.162 * Looking for test storage... 00:11:32.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:32.162 20:18:47 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.162 20:18:47 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66773 00:11:32.162 20:18:47 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66773 00:11:32.162 20:18:47 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66796 00:11:32.162 20:18:47 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:32.162 20:18:47 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66796 00:11:32.162 20:18:47 -- common/autotest_common.sh@819 -- # '[' -z 66796 ']' 00:11:32.162 20:18:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.162 20:18:47 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:32.162 20:18:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:32.162 20:18:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.162 20:18:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:32.162 20:18:47 -- common/autotest_common.sh@10 -- # set +x 00:11:32.427 [2024-10-16 20:18:47.091279] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:32.427 [2024-10-16 20:18:47.091786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66796 ] 00:11:32.427 [2024-10-16 20:18:47.242371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:32.693 [2024-10-16 20:18:47.383858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:32.693 [2024-10-16 20:18:47.384133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.693 [2024-10-16 20:18:47.384142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.265 20:18:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:33.265 20:18:47 -- common/autotest_common.sh@852 -- # return 0 00:11:33.265 Checking default timeout settings: 00:11:33.265 20:18:47 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:33.265 20:18:47 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:33.526 Making settings changes with rpc: 00:11:33.526 20:18:48 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:33.526 20:18:48 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:33.526 Check default vs. 
modified settings: 00:11:33.526 20:18:48 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:33.526 20:18:48 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66773 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66773 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:33.788 Setting action_on_timeout is changed as expected. 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66773 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66773 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:33.788 Setting timeout_us is changed as expected. 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66773 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66773 00:11:33.788 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:34.049 20:18:48 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:34.049 Setting timeout_admin_us is changed as expected. 00:11:34.049 20:18:48 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:34.049 20:18:48 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
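The settings check above boils down to: save the target config, change the NVMe timeout options over RPC, save again, then pull each field out of the two JSON dumps with the same grep/awk/sed pipeline and assert it changed. Condensed from the commands in the log (rpc.py stands for scripts/rpc.py; the test names its tmp files by PID, approximated here with $$):

    rpc.py save_config > /tmp/settings_default_$$
    rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    rpc.py save_config > /tmp/settings_modified_$$
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done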
00:11:34.049 20:18:48 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:34.049 20:18:48 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66773 /tmp/settings_modified_66773 00:11:34.049 20:18:48 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66796 00:11:34.049 20:18:48 -- common/autotest_common.sh@926 -- # '[' -z 66796 ']' 00:11:34.049 20:18:48 -- common/autotest_common.sh@930 -- # kill -0 66796 00:11:34.049 20:18:48 -- common/autotest_common.sh@931 -- # uname 00:11:34.049 20:18:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:34.049 20:18:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66796 00:11:34.049 20:18:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:34.049 20:18:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:34.049 20:18:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66796' 00:11:34.049 killing process with pid 66796 00:11:34.049 20:18:48 -- common/autotest_common.sh@945 -- # kill 66796 00:11:34.049 20:18:48 -- common/autotest_common.sh@950 -- # wait 66796 00:11:34.991 RPC TIMEOUT SETTING TEST PASSED. 00:11:34.991 20:18:49 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:34.991 ************************************ 00:11:34.991 END TEST nvme_rpc_timeouts 00:11:34.991 ************************************ 00:11:34.991 00:11:34.991 real 0m2.958s 00:11:34.991 user 0m5.689s 00:11:34.991 sys 0m0.450s 00:11:34.991 20:18:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.991 20:18:49 -- common/autotest_common.sh@10 -- # set +x 00:11:35.252 20:18:49 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:11:35.252 20:18:49 -- spdk/autotest.sh@255 -- # [[ 1 -eq 1 ]] 00:11:35.252 20:18:49 -- spdk/autotest.sh@256 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:35.252 20:18:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:35.252 20:18:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:35.252 20:18:49 -- common/autotest_common.sh@10 -- # set +x 00:11:35.252 ************************************ 00:11:35.252 START TEST nvme_xnvme 00:11:35.252 ************************************ 00:11:35.252 20:18:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:35.252 * Looking for test storage... 
00:11:35.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:35.252 20:18:50 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.252 20:18:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.252 20:18:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.252 20:18:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.252 20:18:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.252 20:18:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.252 20:18:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.252 20:18:50 -- paths/export.sh@5 -- # export PATH 00:11:35.252 20:18:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:35.252 20:18:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:35.252 20:18:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:35.252 20:18:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.252 ************************************ 00:11:35.252 START TEST xnvme_to_malloc_dd_copy 00:11:35.252 ************************************ 00:11:35.252 20:18:50 -- common/autotest_common.sh@1104 -- # malloc_to_xnvme_copy 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:35.252 20:18:50 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:35.252 20:18:50 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:35.252 20:18:50 -- dd/common.sh@191 -- # return 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@18 -- # local io 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:35.252 
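Both xnvme suites below run against the null_blk kernel module instead of real media: init_null_blk loads it with a 1 GiB device that appears as /dev/nullb0, and remove_null_blk unloads it when the suite finishes. In effect (root required):

    modprobe null_blk gb=1   # exposes /dev/nullb0, a 1 GiB no-op block device
    # ... spdk_dd / bdevperf I/O runs against /dev/nullb0 ...
    modprobe -r null_blk     # teardown, as done by remove_null_blk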
20:18:50 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:35.252 20:18:50 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:35.252 20:18:50 -- dd/common.sh@31 -- # xtrace_disable 00:11:35.252 20:18:50 -- common/autotest_common.sh@10 -- # set +x 00:11:35.252 { 00:11:35.252 "subsystems": [ 00:11:35.252 { 00:11:35.252 "subsystem": "bdev", 00:11:35.252 "config": [ 00:11:35.252 { 00:11:35.252 "params": { 00:11:35.252 "block_size": 512, 00:11:35.252 "num_blocks": 2097152, 00:11:35.252 "name": "malloc0" 00:11:35.252 }, 00:11:35.252 "method": "bdev_malloc_create" 00:11:35.252 }, 00:11:35.252 { 00:11:35.252 "params": { 00:11:35.252 "io_mechanism": "libaio", 00:11:35.252 "filename": "/dev/nullb0", 00:11:35.252 "name": "null0" 00:11:35.252 }, 00:11:35.252 "method": "bdev_xnvme_create" 00:11:35.252 }, 00:11:35.252 { 00:11:35.252 "method": "bdev_wait_for_examine" 00:11:35.252 } 00:11:35.252 ] 00:11:35.252 } 00:11:35.252 ] 00:11:35.252 } 00:11:35.252 [2024-10-16 20:18:50.129980] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
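The gen_conf JSON above gives spdk_dd two bdevs to copy between: malloc0, a 1 GiB RAM disk (2097152 blocks of 512 bytes), and null0, an xnvme bdev over /dev/nullb0 with the libaio io_mechanism. The test passes the config over /dev/fd/62; an equivalent standalone invocation, sketched here with a process substitution instead:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":2097152,"name":"malloc0"},"method":"bdev_malloc_create"},
      {"params":{"io_mechanism":"libaio","filename":"/dev/nullb0","name":"null0"},"method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}
    EOF
    )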
00:11:35.252 [2024-10-16 20:18:50.130102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66917 ] 00:11:35.513 [2024-10-16 20:18:50.278133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.513 [2024-10-16 20:18:50.425291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.460  [2024-10-16T20:18:53.332Z] Copying: 310/1024 [MB] (310 MBps) [2024-10-16T20:18:54.318Z] Copying: 621/1024 [MB] (311 MBps) [2024-10-16T20:18:54.579Z] Copying: 933/1024 [MB] (311 MBps) [2024-10-16T20:18:56.495Z] Copying: 1024/1024 [MB] (average 311 MBps) 00:11:41.566 00:11:41.566 20:18:56 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:41.566 20:18:56 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:41.566 20:18:56 -- dd/common.sh@31 -- # xtrace_disable 00:11:41.566 20:18:56 -- common/autotest_common.sh@10 -- # set +x 00:11:41.827 { 00:11:41.827 "subsystems": [ 00:11:41.827 { 00:11:41.827 "subsystem": "bdev", 00:11:41.827 "config": [ 00:11:41.827 { 00:11:41.827 "params": { 00:11:41.827 "block_size": 512, 00:11:41.827 "num_blocks": 2097152, 00:11:41.827 "name": "malloc0" 00:11:41.827 }, 00:11:41.827 "method": "bdev_malloc_create" 00:11:41.827 }, 00:11:41.827 { 00:11:41.827 "params": { 00:11:41.827 "io_mechanism": "libaio", 00:11:41.827 "filename": "/dev/nullb0", 00:11:41.827 "name": "null0" 00:11:41.827 }, 00:11:41.827 "method": "bdev_xnvme_create" 00:11:41.827 }, 00:11:41.827 { 00:11:41.827 "method": "bdev_wait_for_examine" 00:11:41.827 } 00:11:41.827 ] 00:11:41.827 } 00:11:41.827 ] 00:11:41.827 } 00:11:41.827 [2024-10-16 20:18:56.537350] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:41.827 [2024-10-16 20:18:56.537462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66993 ] 00:11:41.827 [2024-10-16 20:18:56.686145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.088 [2024-10-16 20:18:56.835627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.000  [2024-10-16T20:18:59.870Z] Copying: 312/1024 [MB] (312 MBps) [2024-10-16T20:19:00.812Z] Copying: 625/1024 [MB] (312 MBps) [2024-10-16T20:19:01.075Z] Copying: 937/1024 [MB] (312 MBps) [2024-10-16T20:19:02.986Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:11:48.057 00:11:48.057 20:19:02 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:48.057 20:19:02 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:48.057 20:19:02 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:48.057 20:19:02 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:48.057 20:19:02 -- dd/common.sh@31 -- # xtrace_disable 00:11:48.057 20:19:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.057 { 00:11:48.057 "subsystems": [ 00:11:48.057 { 00:11:48.057 "subsystem": "bdev", 00:11:48.057 "config": [ 00:11:48.057 { 00:11:48.057 "params": { 00:11:48.057 "block_size": 512, 00:11:48.058 "num_blocks": 2097152, 00:11:48.058 "name": "malloc0" 00:11:48.058 }, 00:11:48.058 "method": "bdev_malloc_create" 00:11:48.058 }, 00:11:48.058 { 00:11:48.058 "params": { 00:11:48.058 "io_mechanism": "io_uring", 00:11:48.058 "filename": "/dev/nullb0", 00:11:48.058 "name": "null0" 00:11:48.058 }, 00:11:48.058 "method": "bdev_xnvme_create" 00:11:48.058 }, 00:11:48.058 { 00:11:48.058 "method": "bdev_wait_for_examine" 00:11:48.058 } 00:11:48.058 ] 00:11:48.058 } 00:11:48.058 ] 00:11:48.058 } 00:11:48.058 [2024-10-16 20:19:02.920572] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:48.058 [2024-10-16 20:19:02.920683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67075 ] 00:11:48.318 [2024-10-16 20:19:03.069066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.318 [2024-10-16 20:19:03.209301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.231  [2024-10-16T20:19:06.102Z] Copying: 317/1024 [MB] (317 MBps) [2024-10-16T20:19:07.045Z] Copying: 634/1024 [MB] (317 MBps) [2024-10-16T20:19:07.356Z] Copying: 952/1024 [MB] (317 MBps) [2024-10-16T20:19:09.265Z] Copying: 1024/1024 [MB] (average 317 MBps) 00:11:54.336 00:11:54.336 20:19:09 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:54.336 20:19:09 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:54.336 20:19:09 -- dd/common.sh@31 -- # xtrace_disable 00:11:54.336 20:19:09 -- common/autotest_common.sh@10 -- # set +x 00:11:54.336 { 00:11:54.336 "subsystems": [ 00:11:54.336 { 00:11:54.336 "subsystem": "bdev", 00:11:54.336 "config": [ 00:11:54.336 { 00:11:54.336 "params": { 00:11:54.336 "block_size": 512, 00:11:54.336 "num_blocks": 2097152, 00:11:54.336 "name": "malloc0" 00:11:54.336 }, 00:11:54.336 "method": "bdev_malloc_create" 00:11:54.336 }, 00:11:54.336 { 00:11:54.336 "params": { 00:11:54.336 "io_mechanism": "io_uring", 00:11:54.336 "filename": "/dev/nullb0", 00:11:54.336 "name": "null0" 00:11:54.336 }, 00:11:54.336 "method": "bdev_xnvme_create" 00:11:54.336 }, 00:11:54.336 { 00:11:54.336 "method": "bdev_wait_for_examine" 00:11:54.337 } 00:11:54.337 ] 00:11:54.337 } 00:11:54.337 ] 00:11:54.337 } 00:11:54.337 [2024-10-16 20:19:09.204896] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
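Note that only one field differs between the libaio and io_uring passes of this copy test; the malloc0/null0 wiring is identical, and in the Copying lines above io_uring comes out modestly ahead on this VM (~317 MBps vs ~311-312 MBps for libaio). The delta in the generated config is just:

    {"params":{"io_mechanism":"io_uring","filename":"/dev/nullb0","name":"null0"},"method":"bdev_xnvme_create"}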
00:11:54.337 [2024-10-16 20:19:09.205008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67146 ] 00:11:54.595 [2024-10-16 20:19:09.353182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.595 [2024-10-16 20:19:09.494457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.495  [2024-10-16T20:19:12.360Z] Copying: 325/1024 [MB] (325 MBps) [2024-10-16T20:19:13.302Z] Copying: 652/1024 [MB] (326 MBps) [2024-10-16T20:19:13.562Z] Copying: 979/1024 [MB] (327 MBps) [2024-10-16T20:19:15.475Z] Copying: 1024/1024 [MB] (average 326 MBps) 00:12:00.546 00:12:00.546 20:19:15 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:00.546 20:19:15 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:00.546 00:12:00.546 real 0m25.359s 00:12:00.546 user 0m22.442s 00:12:00.546 sys 0m2.381s 00:12:00.546 20:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.546 ************************************ 00:12:00.546 END TEST xnvme_to_malloc_dd_copy 00:12:00.546 ************************************ 00:12:00.546 20:19:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.546 20:19:15 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:00.546 20:19:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:00.546 20:19:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:00.546 20:19:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.546 ************************************ 00:12:00.546 START TEST xnvme_bdevperf 00:12:00.546 ************************************ 00:12:00.546 20:19:15 -- common/autotest_common.sh@1104 -- # xnvme_bdevperf 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:00.806 20:19:15 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:00.806 20:19:15 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:00.806 20:19:15 -- dd/common.sh@191 -- # return 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@60 -- # local io 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:00.806 20:19:15 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:00.806 20:19:15 -- dd/common.sh@31 -- # xtrace_disable 00:12:00.806 20:19:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.806 { 00:12:00.806 "subsystems": [ 00:12:00.806 { 00:12:00.806 "subsystem": "bdev", 00:12:00.807 "config": [ 00:12:00.807 { 00:12:00.807 "params": { 00:12:00.807 "io_mechanism": "libaio", 
00:12:00.807 "filename": "/dev/nullb0", 00:12:00.807 "name": "null0" 00:12:00.807 }, 00:12:00.807 "method": "bdev_xnvme_create" 00:12:00.807 }, 00:12:00.807 { 00:12:00.807 "method": "bdev_wait_for_examine" 00:12:00.807 } 00:12:00.807 ] 00:12:00.807 } 00:12:00.807 ] 00:12:00.807 } 00:12:00.807 [2024-10-16 20:19:15.567118] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:00.807 [2024-10-16 20:19:15.567230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67246 ] 00:12:00.807 [2024-10-16 20:19:15.716969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.088 [2024-10-16 20:19:15.875776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.375 Running I/O for 5 seconds... 00:12:06.658 00:12:06.658 Latency(us) 00:12:06.658 [2024-10-16T20:19:21.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.658 [2024-10-16T20:19:21.587Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:06.658 null0 : 5.00 208824.67 815.72 0.00 0.00 304.25 113.43 1392.64 00:12:06.658 [2024-10-16T20:19:21.587Z] =================================================================================================================== 00:12:06.658 [2024-10-16T20:19:21.587Z] Total : 208824.67 815.72 0.00 0.00 304.25 113.43 1392.64 00:12:06.919 20:19:21 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:06.919 20:19:21 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:06.919 20:19:21 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:06.919 20:19:21 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:06.919 20:19:21 -- dd/common.sh@31 -- # xtrace_disable 00:12:06.919 20:19:21 -- common/autotest_common.sh@10 -- # set +x 00:12:06.919 { 00:12:06.919 "subsystems": [ 00:12:06.919 { 00:12:06.919 "subsystem": "bdev", 00:12:06.919 "config": [ 00:12:06.919 { 00:12:06.919 "params": { 00:12:06.919 "io_mechanism": "io_uring", 00:12:06.919 "filename": "/dev/nullb0", 00:12:06.919 "name": "null0" 00:12:06.919 }, 00:12:06.919 "method": "bdev_xnvme_create" 00:12:06.919 }, 00:12:06.919 { 00:12:06.919 "method": "bdev_wait_for_examine" 00:12:06.919 } 00:12:06.919 ] 00:12:06.919 } 00:12:06.919 ] 00:12:06.919 } 00:12:06.919 [2024-10-16 20:19:21.773177] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:06.919 [2024-10-16 20:19:21.773712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67320 ] 00:12:07.180 [2024-10-16 20:19:21.921096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.180 [2024-10-16 20:19:22.073836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.441 Running I/O for 5 seconds... 
00:12:12.713 00:12:12.713 Latency(us) 00:12:12.713 [2024-10-16T20:19:27.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.714 [2024-10-16T20:19:27.643Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:12.714 null0 : 5.00 238278.23 930.77 0.00 0.00 266.60 154.39 2029.10 00:12:12.714 [2024-10-16T20:19:27.643Z] =================================================================================================================== 00:12:12.714 [2024-10-16T20:19:27.643Z] Total : 238278.23 930.77 0.00 0.00 266.60 154.39 2029.10 00:12:12.974 20:19:27 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:12.974 20:19:27 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:13.235 00:12:13.235 real 0m12.433s 00:12:13.235 user 0m10.026s 00:12:13.235 sys 0m2.177s 00:12:13.235 20:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.235 20:19:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.235 ************************************ 00:12:13.236 END TEST xnvme_bdevperf 00:12:13.236 ************************************ 00:12:13.236 00:12:13.236 real 0m37.976s 00:12:13.236 user 0m32.524s 00:12:13.236 sys 0m4.651s 00:12:13.236 20:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.236 ************************************ 00:12:13.236 END TEST nvme_xnvme 00:12:13.236 ************************************ 00:12:13.236 20:19:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 20:19:27 -- spdk/autotest.sh@257 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:13.236 20:19:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:13.236 20:19:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:13.236 20:19:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 ************************************ 00:12:13.236 START TEST blockdev_xnvme 00:12:13.236 ************************************ 00:12:13.236 20:19:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:13.236 * Looking for test storage... 
00:12:13.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:13.236 20:19:28 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:13.236 20:19:28 -- bdev/nbd_common.sh@6 -- # set -e 00:12:13.236 20:19:28 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:13.236 20:19:28 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:13.236 20:19:28 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:13.236 20:19:28 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:13.236 20:19:28 -- bdev/blockdev.sh@18 -- # : 00:12:13.236 20:19:28 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:13.236 20:19:28 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:13.236 20:19:28 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:13.236 20:19:28 -- bdev/blockdev.sh@672 -- # uname -s 00:12:13.236 20:19:28 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:13.236 20:19:28 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:13.236 20:19:28 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:13.236 20:19:28 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:13.236 20:19:28 -- bdev/blockdev.sh@682 -- # dek= 00:12:13.236 20:19:28 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:13.236 20:19:28 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:13.236 20:19:28 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:13.236 20:19:28 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:13.236 20:19:28 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:13.236 20:19:28 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:13.236 20:19:28 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67453 00:12:13.236 20:19:28 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:13.236 20:19:28 -- bdev/blockdev.sh@47 -- # waitforlisten 67453 00:12:13.236 20:19:28 -- common/autotest_common.sh@819 -- # '[' -z 67453 ']' 00:12:13.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.236 20:19:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.236 20:19:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:13.236 20:19:28 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:13.236 20:19:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.236 20:19:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:13.236 20:19:28 -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 [2024-10-16 20:19:28.136835] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:12:13.236 [2024-10-16 20:19:28.136925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67453 ] 00:12:13.497 [2024-10-16 20:19:28.268515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.497 [2024-10-16 20:19:28.406581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:13.497 [2024-10-16 20:19:28.406733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.069 20:19:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:14.069 20:19:28 -- common/autotest_common.sh@852 -- # return 0 00:12:14.069 20:19:28 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:14.069 20:19:28 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:14.069 20:19:28 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:14.069 20:19:28 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:14.069 20:19:28 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:14.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:14.641 Waiting for block devices as requested 00:12:14.641 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:14.641 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:14.641 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:14.902 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.194 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:20.194 20:19:34 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:12:20.194 20:19:34 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:12:20.194 20:19:34 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:12:20.194 20:19:34 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:12:20.194 20:19:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:12:20.194 20:19:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:12:20.194 20:19:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:12:20.194 20:19:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:12:20.194 20:19:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:12:20.194 20:19:34 -- common/autotest_common.sh@1647 -- # local 
device=nvme1n2 00:12:20.194 20:19:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:12:20.194 20:19:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:12:20.194 20:19:34 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:12:20.194 20:19:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:12:20.194 20:19:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:12:20.194 20:19:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:20.194 20:19:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.194 20:19:34 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.194 20:19:34 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.194 20:19:34 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.194 20:19:34 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.194 20:19:34 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.194 20:19:34 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:12:20.194 20:19:34 -- bdev/blockdev.sh@98 -- # rpc_cmd 
00:12:20.194 20:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.194 20:19:34 -- common/autotest_common.sh@10 -- # set +x 00:12:20.194 20:19:34 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:20.194 nvme0n1 00:12:20.194 nvme1n1 00:12:20.194 nvme1n2 00:12:20.194 nvme1n3 00:12:20.194 nvme2n1 00:12:20.194 nvme3n1 00:12:20.194 20:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:20.194 20:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.194 20:19:34 -- common/autotest_common.sh@10 -- # set +x 00:12:20.194 20:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@738 -- # cat 00:12:20.194 20:19:34 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:20.194 20:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.194 20:19:34 -- common/autotest_common.sh@10 -- # set +x 00:12:20.194 20:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:20.194 20:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.194 20:19:34 -- common/autotest_common.sh@10 -- # set +x 00:12:20.194 20:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:20.194 20:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.194 20:19:34 -- common/autotest_common.sh@10 -- # set +x 00:12:20.194 20:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:20.194 20:19:34 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:20.194 20:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.194 20:19:34 -- common/autotest_common.sh@10 -- # set +x 00:12:20.194 20:19:34 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:20.194 20:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.194 20:19:34 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:20.194 20:19:34 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:20.195 20:19:34 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "89d7f7e2-d8c1-4b38-9b82-4adfc3413da8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "89d7f7e2-d8c1-4b38-9b82-4adfc3413da8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3b3f4579-d53d-42db-8a1c-56cdc8798bd8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3b3f4579-d53d-42db-8a1c-56cdc8798bd8",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "681e60b8-c95a-43f0-9f80-5c67d3981dcc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "681e60b8-c95a-43f0-9f80-5c67d3981dcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "3962c1a4-701c-46b8-a641-ee0fc3ad1e04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3962c1a4-701c-46b8-a641-ee0fc3ad1e04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "51b3e2a6-9bc5-4801-a9b8-aef0d6934fbc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "51b3e2a6-9bc5-4801-a9b8-aef0d6934fbc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a30c9e1e-bdb7-4b1c-bf4b-78408667b9dd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a30c9e1e-bdb7-4b1c-bf4b-78408667b9dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:20.195 20:19:34 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:20.195 20:19:34 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:12:20.195 20:19:34 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:20.195 20:19:34 -- bdev/blockdev.sh@752 -- # killprocess 67453 00:12:20.195 20:19:34 -- 
common/autotest_common.sh@926 -- # '[' -z 67453 ']' 00:12:20.195 20:19:34 -- common/autotest_common.sh@930 -- # kill -0 67453 00:12:20.195 20:19:34 -- common/autotest_common.sh@931 -- # uname 00:12:20.195 20:19:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:20.195 20:19:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67453 00:12:20.195 20:19:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:20.195 killing process with pid 67453 00:12:20.195 20:19:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:20.195 20:19:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67453' 00:12:20.195 20:19:34 -- common/autotest_common.sh@945 -- # kill 67453 00:12:20.195 20:19:34 -- common/autotest_common.sh@950 -- # wait 67453 00:12:21.137 20:19:36 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:21.137 20:19:36 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:21.137 20:19:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:12:21.137 20:19:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:21.137 20:19:36 -- common/autotest_common.sh@10 -- # set +x 00:12:21.399 ************************************ 00:12:21.399 START TEST bdev_hello_world 00:12:21.399 ************************************ 00:12:21.399 20:19:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:21.399 [2024-10-16 20:19:36.142297] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:21.399 [2024-10-16 20:19:36.142413] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67828 ] 00:12:21.399 [2024-10-16 20:19:36.289021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.661 [2024-10-16 20:19:36.432831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.922 [2024-10-16 20:19:36.713579] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:21.922 [2024-10-16 20:19:36.713622] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:21.922 [2024-10-16 20:19:36.713634] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:21.922 [2024-10-16 20:19:36.715074] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:21.922 [2024-10-16 20:19:36.715376] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:21.922 [2024-10-16 20:19:36.715397] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:21.922 [2024-10-16 20:19:36.715574] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
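bdev_hello_world exercises the first xnvme bdev end to end: hello_bdev opens nvme0n1, writes a buffer through an io channel, reads it back, and the "Read string from bdev : Hello World!" notice above is the success criterion. The invocation as run_test issued it:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1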
00:12:21.922 00:12:21.922 [2024-10-16 20:19:36.715598] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:22.492 00:12:22.492 real 0m1.247s 00:12:22.492 user 0m0.982s 00:12:22.492 sys 0m0.154s 00:12:22.492 20:19:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.492 20:19:37 -- common/autotest_common.sh@10 -- # set +x 00:12:22.492 ************************************ 00:12:22.492 END TEST bdev_hello_world 00:12:22.492 ************************************ 00:12:22.492 20:19:37 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:22.492 20:19:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:22.492 20:19:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:22.492 20:19:37 -- common/autotest_common.sh@10 -- # set +x 00:12:22.492 ************************************ 00:12:22.492 START TEST bdev_bounds 00:12:22.492 ************************************ 00:12:22.492 20:19:37 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:12:22.492 Process bdevio pid: 67870 00:12:22.492 20:19:37 -- bdev/blockdev.sh@288 -- # bdevio_pid=67870 00:12:22.492 20:19:37 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:22.492 20:19:37 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67870' 00:12:22.492 20:19:37 -- bdev/blockdev.sh@291 -- # waitforlisten 67870 00:12:22.492 20:19:37 -- common/autotest_common.sh@819 -- # '[' -z 67870 ']' 00:12:22.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.492 20:19:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.492 20:19:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:22.492 20:19:37 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:22.492 20:19:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.492 20:19:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:22.492 20:19:37 -- common/autotest_common.sh@10 -- # set +x 00:12:22.752 [2024-10-16 20:19:37.442813] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:12:22.752 [2024-10-16 20:19:37.442925] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67870 ]
00:12:22.752 [2024-10-16 20:19:37.591725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:23.013 [2024-10-16 20:19:37.809890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:23.013 [2024-10-16 20:19:37.810210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:12:23.013 [2024-10-16 20:19:37.810289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.585 20:19:38 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:12:23.586 20:19:38 -- common/autotest_common.sh@852 -- # return 0
00:12:23.586 20:19:38 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:12:23.586 I/O targets:
00:12:23.586 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB)
00:12:23.586 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:23.586 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:23.586 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:23.586 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:12:23.586 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:12:23.586
00:12:23.586
00:12:23.586 CUnit - A unit testing framework for C - Version 2.1-3
00:12:23.586 http://cunit.sourceforge.net/
00:12:23.586
00:12:23.586
00:12:23.586 Suite: bdevio tests on: nvme3n1
00:12:23.586 Test: blockdev write read block ...passed
00:12:23.586 Test: blockdev write zeroes read block ...passed
00:12:23.586 Test: blockdev write zeroes read no split ...passed
00:12:23.586 Test: blockdev write zeroes read split ...passed
00:12:23.586 Test: blockdev write zeroes read split partial ...passed
00:12:23.586 Test: blockdev reset ...passed
00:12:23.586 Test: blockdev write read 8 blocks ...passed
00:12:23.586 Test: blockdev write read size > 128k ...passed
00:12:23.586 Test: blockdev write read invalid size ...passed
00:12:23.586 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:23.586 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:23.586 Test: blockdev write read max offset ...passed
00:12:23.586 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:23.586 Test: blockdev writev readv 8 blocks ...passed
00:12:23.586 Test: blockdev writev readv 30 x 1block ...passed
00:12:23.586 Test: blockdev writev readv block ...passed
00:12:23.586 Test: blockdev writev readv size > 128k ...passed
00:12:23.586 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:23.586 Test: blockdev comparev and writev ...passed
00:12:23.586 Test: blockdev nvme passthru rw ...passed
00:12:23.586 Test: blockdev nvme passthru vendor specific ...passed
00:12:23.586 Test: blockdev nvme admin passthru ...passed
00:12:23.586 Test: blockdev copy ...passed
00:12:23.586 Suite: bdevio tests on: nvme2n1
00:12:23.586 Test: blockdev write read block ...passed
00:12:23.586 Test: blockdev write zeroes read block ...passed
00:12:23.586 Test: blockdev write zeroes read no split ...passed
00:12:23.586 Test: blockdev write zeroes read split ...passed
00:12:23.586 Test: blockdev write zeroes read split partial ...passed
00:12:23.586 Test: blockdev reset ...passed
00:12:23.586 Test: blockdev write read 8 blocks ...passed
00:12:23.586 Test: blockdev write read size > 128k ...passed
00:12:23.586 Test: blockdev write read invalid size ...passed
00:12:23.586 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:23.586 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:23.586 Test: blockdev write read max offset ...passed
00:12:23.586 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:23.586 Test: blockdev writev readv 8 blocks ...passed
00:12:23.586 Test: blockdev writev readv 30 x 1block ...passed
00:12:23.586 Test: blockdev writev readv block ...passed
00:12:23.586 Test: blockdev writev readv size > 128k ...passed
00:12:23.586 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:23.586 Test: blockdev comparev and writev ...passed
00:12:23.586 Test: blockdev nvme passthru rw ...passed
00:12:23.586 Test: blockdev nvme passthru vendor specific ...passed
00:12:23.586 Test: blockdev nvme admin passthru ...passed
00:12:23.586 Test: blockdev copy ...passed
00:12:23.586 Suite: bdevio tests on: nvme1n3
00:12:23.586 Test: blockdev write read block ...passed
00:12:23.586 Test: blockdev write zeroes read block ...passed
00:12:23.586 Test: blockdev write zeroes read no split ...passed
00:12:23.848 Test: blockdev write zeroes read split ...passed
00:12:23.848 Test: blockdev write zeroes read split partial ...passed
00:12:23.848 Test: blockdev reset ...passed
00:12:23.848 Test: blockdev write read 8 blocks ...passed
00:12:23.848 Test: blockdev write read size > 128k ...passed
00:12:23.848 Test: blockdev write read invalid size ...passed
00:12:23.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:23.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:23.848 Test: blockdev write read max offset ...passed
00:12:23.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:23.848 Test: blockdev writev readv 8 blocks ...passed
00:12:23.848 Test: blockdev writev readv 30 x 1block ...passed
00:12:23.848 Test: blockdev writev readv block ...passed
00:12:23.848 Test: blockdev writev readv size > 128k ...passed
00:12:23.848 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:23.848 Test: blockdev comparev and writev ...passed
00:12:23.848 Test: blockdev nvme passthru rw ...passed
00:12:23.848 Test: blockdev nvme passthru vendor specific ...passed
00:12:23.848 Test: blockdev nvme admin passthru ...passed
00:12:23.848 Test: blockdev copy ...passed
00:12:23.848 Suite: bdevio tests on: nvme1n2
00:12:23.848 Test: blockdev write read block ...passed
00:12:23.848 Test: blockdev write zeroes read block ...passed
00:12:23.848 Test: blockdev write zeroes read no split ...passed
00:12:23.848 Test: blockdev write zeroes read split ...passed
00:12:23.848 Test: blockdev write zeroes read split partial ...passed
00:12:23.848 Test: blockdev reset ...passed
00:12:23.848 Test: blockdev write read 8 blocks ...passed
00:12:23.848 Test: blockdev write read size > 128k ...passed
00:12:23.848 Test: blockdev write read invalid size ...passed
00:12:23.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:23.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:23.848 Test: blockdev write read max offset ...passed
00:12:23.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:23.848 Test: blockdev writev readv 8 blocks ...passed
00:12:23.848 Test: blockdev writev readv 30 x 1block ...passed
00:12:23.848 Test: blockdev writev readv block ...passed
00:12:23.848 Test: blockdev writev readv size > 128k ...passed
00:12:23.848 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:23.848 Test: blockdev comparev and writev ...passed
00:12:23.848 Test: blockdev nvme passthru rw ...passed
00:12:23.848 Test: blockdev nvme passthru vendor specific ...passed
00:12:23.848 Test: blockdev nvme admin passthru ...passed
00:12:23.848 Test: blockdev copy ...passed
00:12:23.848 Suite: bdevio tests on: nvme1n1
00:12:23.848 Test: blockdev write read block ...passed
00:12:23.848 Test: blockdev write zeroes read block ...passed
00:12:23.848 Test: blockdev write zeroes read no split ...passed
00:12:23.848 Test: blockdev write zeroes read split ...passed
00:12:23.848 Test: blockdev write zeroes read split partial ...passed
00:12:23.848 Test: blockdev reset ...passed
00:12:23.848 Test: blockdev write read 8 blocks ...passed
00:12:23.848 Test: blockdev write read size > 128k ...passed
00:12:23.848 Test: blockdev write read invalid size ...passed
00:12:23.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:23.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:23.848 Test: blockdev write read max offset ...passed
00:12:23.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:23.848 Test: blockdev writev readv 8 blocks ...passed
00:12:23.848 Test: blockdev writev readv 30 x 1block ...passed
00:12:23.848 Test: blockdev writev readv block ...passed
00:12:23.848 Test: blockdev writev readv size > 128k ...passed
00:12:23.848 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:23.848 Test: blockdev comparev and writev ...passed
00:12:23.848 Test: blockdev nvme passthru rw ...passed
00:12:23.848 Test: blockdev nvme passthru vendor specific ...passed
00:12:23.848 Test: blockdev nvme admin passthru ...passed
00:12:23.848 Test: blockdev copy ...passed
00:12:23.848 Suite: bdevio tests on: nvme0n1
00:12:23.848 Test: blockdev write read block ...passed
00:12:23.848 Test: blockdev write zeroes read block ...passed
00:12:23.848 Test: blockdev write zeroes read no split ...passed
00:12:23.848 Test: blockdev write zeroes read split ...passed
00:12:23.848 Test: blockdev write zeroes read split partial ...passed
00:12:23.848 Test: blockdev reset ...passed
00:12:23.848 Test: blockdev write read 8 blocks ...passed
00:12:24.109 Test: blockdev write read size > 128k ...passed
00:12:24.109 Test: blockdev write read invalid size ...passed
00:12:24.109 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:24.109 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:24.109 Test: blockdev write read max offset ...passed
00:12:24.109 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:24.109 Test: blockdev writev readv 8 blocks ...passed
00:12:24.109 Test: blockdev writev readv 30 x 1block ...passed
00:12:24.109 Test: blockdev writev readv block ...passed
00:12:24.109 Test: blockdev writev readv size > 128k ...passed
00:12:24.109 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:24.109 Test: blockdev comparev and writev ...passed
00:12:24.109 Test: blockdev nvme passthru rw ...passed
00:12:24.109 Test: blockdev nvme passthru vendor specific ...passed
00:12:24.109 Test: blockdev nvme admin passthru ...passed
00:12:24.109 Test: blockdev copy ...passed
00:12:24.109
00:12:24.109 Run Summary: Type Total Ran Passed Failed Inactive
00:12:24.109 suites 6 6 n/a 0 0
00:12:24.109 tests 138 138 138 0 0
00:12:24.109 asserts 780 780 780 0 n/a
00:12:24.109
00:12:24.109 Elapsed time = 1.123 seconds
00:12:24.109 0
00:12:24.109 20:19:38 -- bdev/blockdev.sh@293 -- # killprocess 67870
00:12:24.109 20:19:38 -- common/autotest_common.sh@926 -- # '[' -z 67870 ']'
00:12:24.109 20:19:38 -- common/autotest_common.sh@930 -- # kill -0 67870
00:12:24.109 20:19:38 -- common/autotest_common.sh@931 -- # uname
00:12:24.109 20:19:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:12:24.109 20:19:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67870
00:12:24.109 20:19:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:12:24.109 20:19:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:12:24.109 20:19:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67870'
00:12:24.109 killing process with pid 67870
00:12:24.109 20:19:38 -- common/autotest_common.sh@945 -- # kill 67870
00:12:24.109 20:19:38 -- common/autotest_common.sh@950 -- # wait 67870
00:12:24.681 20:19:39 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:12:24.681
00:12:24.681 real 0m2.084s
00:12:24.681 user 0m4.805s
00:12:24.681 sys 0m0.316s
00:12:24.681 20:19:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:24.681 ************************************
00:12:24.681 20:19:39 -- common/autotest_common.sh@10 -- # set +x
00:12:24.681 END TEST bdev_bounds
00:12:24.681 ************************************
00:12:24.681 20:19:39 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' ''
00:12:24.681 20:19:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:12:24.681 20:19:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:12:24.681 20:19:39 -- common/autotest_common.sh@10 -- # set +x
00:12:24.681 ************************************
00:12:24.681 START TEST bdev_nbd
00:12:24.681 ************************************
00:12:24.681 20:19:39 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' ''
00:12:24.681 20:19:39 -- bdev/blockdev.sh@298 -- # uname -s
00:12:24.681 20:19:39 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:12:24.681 20:19:39 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:24.681 20:19:39 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:12:24.681 20:19:39 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:12:24.681 20:19:39 -- bdev/blockdev.sh@302 -- # local bdev_all
00:12:24.681 20:19:39 -- bdev/blockdev.sh@303 -- # local bdev_num=6
00:12:24.681 20:19:39 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:12:24.681 20:19:39 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:24.681 20:19:39 -- bdev/blockdev.sh@309 -- # local nbd_all
00:12:24.681 20:19:39 -- bdev/blockdev.sh@310 -- # bdev_num=6
00:12:24.681 20:19:39 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:24.681 20:19:39 -- bdev/blockdev.sh@312 -- # local nbd_list
00:12:24.681 20:19:39 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:12:24.681 20:19:39 -- bdev/blockdev.sh@313 -- # local bdev_list
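
The six bdevs and six nbd nodes declared just above pair off positionally; as a quick orientation aid, this hypothetical snippet (names taken from the arrays in the trace) prints the mapping nbd_function_test works with:

    # Positional bdev -> nbd pairing used by the test.
    bdev_list=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for i in "${!bdev_list[@]}"; do
        echo "${bdev_list[$i]} -> ${nbd_list[$i]}"
    done
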
00:12:24.681 20:19:39 -- bdev/blockdev.sh@316 -- # nbd_pid=67924
00:12:24.682 20:19:39 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:12:24.682 20:19:39 -- bdev/blockdev.sh@318 -- # waitforlisten 67924 /var/tmp/spdk-nbd.sock
00:12:24.682 20:19:39 -- common/autotest_common.sh@819 -- # '[' -z 67924 ']'
00:12:24.682 20:19:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:12:24.682 20:19:39 -- common/autotest_common.sh@824 -- # local max_retries=100
00:12:24.682 20:19:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:12:24.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:12:24.682 20:19:39 -- common/autotest_common.sh@828 -- # xtrace_disable
00:12:24.682 20:19:39 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:12:24.682 20:19:39 -- common/autotest_common.sh@10 -- # set +x
00:12:24.682 [2024-10-16 20:19:39.593015] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:12:24.682 [2024-10-16 20:19:39.593141] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:24.942 [2024-10-16 20:19:39.744603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:25.203 [2024-10-16 20:19:39.881852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:25.776 20:19:40 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:12:25.776 20:19:40 -- common/autotest_common.sh@852 -- # return 0
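
The bdev_svc startup just traced is the standard scaffold for these NBD tests: a stub SPDK app serving the bdevs on a private RPC socket, with waitforlisten blocking until the socket appears. A condensed sketch under the same paths (the retry cap mirrors the max_retries=100 seen above):

    # Start the stub bdev app on its own RPC socket and wait for it.
    SPDK=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    "$SPDK"/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 \
        --json "$SPDK"/test/bdev/bdev.json &
    for (( retry = 0; retry < 100; retry++ )); do
        [ -S "$sock" ] && break   # socket present: RPC server is up
        sleep 0.1
    done
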
00:12:25.776 20:19:40 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1'
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@114 -- # local bdev_list
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1'
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@23 -- # local bdev_list
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@24 -- # local i
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@25 -- # local nbd_device
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:12:25.776 20:19:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:12:25.776 20:19:40 -- common/autotest_common.sh@857 -- # local i
00:12:25.776 20:19:40 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:25.776 20:19:40 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:25.776 20:19:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:12:25.776 20:19:40 -- common/autotest_common.sh@861 -- # break
00:12:25.776 20:19:40 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:25.776 20:19:40 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:25.776 20:19:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:25.776 1+0 records in
00:12:25.776 1+0 records out
00:12:25.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656436 s, 6.2 MB/s
00:12:25.776 20:19:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:25.776 20:19:40 -- common/autotest_common.sh@874 -- # size=4096
00:12:25.776 20:19:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:25.776 20:19:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:25.776 20:19:40 -- common/autotest_common.sh@877 -- # return 0
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:12:25.776 20:19:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1
00:12:26.037 20:19:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:12:26.037 20:19:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:12:26.037 20:19:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:12:26.037 20:19:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:12:26.037 20:19:40 -- common/autotest_common.sh@857 -- # local i
00:12:26.037 20:19:40 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:26.037 20:19:40 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:26.037 20:19:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:12:26.037 20:19:40 -- common/autotest_common.sh@861 -- # break
00:12:26.037 20:19:40 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:26.037 20:19:40 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:26.037 20:19:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:26.037 1+0 records in
00:12:26.037 1+0 records out
00:12:26.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334817 s, 12.2 MB/s
00:12:26.037 20:19:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.037 20:19:40 -- common/autotest_common.sh@874 -- # size=4096
00:12:26.037 20:19:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.037 20:19:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:26.037 20:19:40 -- common/autotest_common.sh@877 -- # return 0
00:12:26.037 20:19:40 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:26.037 20:19:40 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
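
The waitfornbd helper, already seen twice above, gates each mapping on two readiness checks. Roughly, under the same retry limits as this run (the scratch path here is illustrative):

    # A /dev/nbdX counts as ready once it shows up in /proc/partitions and
    # a direct-I/O read of one 4k block comes back non-empty.
    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }
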
00:12:26.037 20:19:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2
00:12:26.298 20:19:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:12:26.298 20:19:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:12:26.298 20:19:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:12:26.298 20:19:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2
00:12:26.298 20:19:41 -- common/autotest_common.sh@857 -- # local i
00:12:26.298 20:19:41 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:26.298 20:19:41 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:26.298 20:19:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions
00:12:26.298 20:19:41 -- common/autotest_common.sh@861 -- # break
00:12:26.298 20:19:41 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:26.298 20:19:41 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:26.298 20:19:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:26.298 1+0 records in
00:12:26.298 1+0 records out
00:12:26.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545373 s, 7.5 MB/s
00:12:26.298 20:19:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.298 20:19:41 -- common/autotest_common.sh@874 -- # size=4096
00:12:26.298 20:19:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.298 20:19:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:26.298 20:19:41 -- common/autotest_common.sh@877 -- # return 0
00:12:26.298 20:19:41 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:26.298 20:19:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:12:26.298 20:19:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3
00:12:26.559 20:19:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:12:26.559 20:19:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:12:26.559 20:19:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:12:26.559 20:19:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3
00:12:26.559 20:19:41 -- common/autotest_common.sh@857 -- # local i
00:12:26.559 20:19:41 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:26.559 20:19:41 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:26.559 20:19:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions
00:12:26.559 20:19:41 -- common/autotest_common.sh@861 -- # break
00:12:26.559 20:19:41 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:26.559 20:19:41 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:26.559 20:19:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:26.559 1+0 records in
00:12:26.559 1+0 records out
00:12:26.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417424 s, 9.8 MB/s
00:12:26.559 20:19:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.559 20:19:41 -- common/autotest_common.sh@874 -- # size=4096
00:12:26.559 20:19:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.559 20:19:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:26.559 20:19:41 -- common/autotest_common.sh@877 -- # return 0
00:12:26.559 20:19:41 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:26.559 20:19:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:12:26.559 20:19:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1
00:12:26.820 20:19:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:12:26.820 20:19:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:12:26.821 20:19:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4
00:12:26.821 20:19:41 -- common/autotest_common.sh@857 -- # local i
00:12:26.821 20:19:41 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:26.821 20:19:41 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:26.821 20:19:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions
00:12:26.821 20:19:41 -- common/autotest_common.sh@861 -- # break
00:12:26.821 20:19:41 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:26.821 20:19:41 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:26.821 20:19:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:26.821 1+0 records in
00:12:26.821 1+0 records out
00:12:26.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571221 s, 7.2 MB/s
00:12:26.821 20:19:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.821 20:19:41 -- common/autotest_common.sh@874 -- # size=4096
00:12:26.821 20:19:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.821 20:19:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:26.821 20:19:41 -- common/autotest_common.sh@877 -- # return 0
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:12:26.821 20:19:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5
00:12:26.821 20:19:41 -- common/autotest_common.sh@857 -- # local i
00:12:26.821 20:19:41 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:26.821 20:19:41 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:26.821 20:19:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions
00:12:26.821 20:19:41 -- common/autotest_common.sh@861 -- # break
00:12:26.821 20:19:41 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:26.821 20:19:41 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:26.821 20:19:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:26.821 1+0 records in
00:12:26.821 1+0 records out
00:12:26.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248179 s, 16.5 MB/s
00:12:26.821 20:19:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.821 20:19:41 -- common/autotest_common.sh@874 -- # size=4096
00:12:26.821 20:19:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:26.821 20:19:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:26.821 20:19:41 -- common/autotest_common.sh@877 -- # return 0
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:12:26.821 20:19:41 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd0",
00:12:27.082 "bdev_name": "nvme0n1"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd1",
00:12:27.082 "bdev_name": "nvme1n1"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd2",
00:12:27.082 "bdev_name": "nvme1n2"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd3",
00:12:27.082 "bdev_name": "nvme1n3"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd4",
00:12:27.082 "bdev_name": "nvme2n1"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd5",
00:12:27.082 "bdev_name": "nvme3n1"
00:12:27.082 }
00:12:27.082 ]'
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@119 -- # echo '[
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd0",
00:12:27.082 "bdev_name": "nvme0n1"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd1",
00:12:27.082 "bdev_name": "nvme1n1"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd2",
00:12:27.082 "bdev_name": "nvme1n2"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd3",
00:12:27.082 "bdev_name": "nvme1n3"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd4",
00:12:27.082 "bdev_name": "nvme2n1"
00:12:27.082 },
00:12:27.082 {
00:12:27.082 "nbd_device": "/dev/nbd5",
00:12:27.082 "bdev_name": "nvme3n1"
00:12:27.082 }
00:12:27.082 ]'
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@51 -- # local i
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:27.082 20:19:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@41 -- # break
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@45 -- # return 0
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:27.343 20:19:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@41 -- # break
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@45 -- # return 0
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:27.604 20:19:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@41 -- # break
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@45 -- # return 0
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@41 -- # break
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@45 -- # return 0
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:27.865 20:19:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@41 -- # break
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@45 -- # return 0
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:28.126 20:19:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@41 -- # break
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@45 -- # return 0
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:28.387 20:19:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@65 -- # echo ''
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@65 -- # true
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@65 -- # count=0
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@66 -- # echo 0
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@122 -- # count=0
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@127 -- # return 0
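
That closes the start/stop pass: every mapping came up, was listed over RPC, and tore down to an empty JSON array. Stripped of the harness, the lifecycle exercised here reduces to three rpc.py calls (a sketch against the same socket, not the harness itself):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk nvme0n1 /dev/nbd0            # export a bdev as a kernel block device
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'   # list active bdev <-> nbd mappings
    $rpc nbd_stop_disk /dev/nbd0                     # tear the mapping down again
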
00:12:28.649 20:19:43 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@12 -- # local i
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:12:28.649 /dev/nbd0
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:28.649 20:19:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:12:28.649 20:19:43 -- common/autotest_common.sh@857 -- # local i
00:12:28.649 20:19:43 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:28.649 20:19:43 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:28.649 20:19:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:12:28.649 20:19:43 -- common/autotest_common.sh@861 -- # break
00:12:28.649 20:19:43 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:28.649 20:19:43 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:28.649 20:19:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:28.649 1+0 records in
00:12:28.649 1+0 records out
00:12:28.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469841 s, 8.7 MB/s
00:12:28.649 20:19:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.649 20:19:43 -- common/autotest_common.sh@874 -- # size=4096
00:12:28.649 20:19:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.649 20:19:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:28.649 20:19:43 -- common/autotest_common.sh@877 -- # return 0
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:28.649 20:19:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1
00:12:28.911 /dev/nbd1
00:12:28.911 20:19:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:28.911 20:19:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:28.911 20:19:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:12:28.911 20:19:43 -- common/autotest_common.sh@857 -- # local i
00:12:28.911 20:19:43 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:28.911 20:19:43 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:28.911 20:19:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:12:28.911 20:19:43 -- common/autotest_common.sh@861 -- # break
00:12:28.911 20:19:43 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:28.911 20:19:43 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:28.911 20:19:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:28.911 1+0 records in
00:12:28.911 1+0 records out
00:12:28.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243555 s, 16.8 MB/s
00:12:28.911 20:19:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.911 20:19:43 -- common/autotest_common.sh@874 -- # size=4096
00:12:28.911 20:19:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.911 20:19:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:28.911 20:19:43 -- common/autotest_common.sh@877 -- # return 0
00:12:28.911 20:19:43 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:28.911 20:19:43 -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:28.911 20:19:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10
00:12:29.171 /dev/nbd10
00:12:29.171 20:19:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:12:29.171 20:19:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:12:29.171 20:19:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10
00:12:29.171 20:19:43 -- common/autotest_common.sh@857 -- # local i
00:12:29.171 20:19:43 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:29.171 20:19:43 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:29.171 20:19:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions
00:12:29.171 20:19:43 -- common/autotest_common.sh@861 -- # break
00:12:29.171 20:19:43 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:29.171 20:19:43 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:29.171 20:19:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:29.171 1+0 records in
00:12:29.171 1+0 records out
00:12:29.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403881 s, 10.1 MB/s
00:12:29.171 20:19:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.172 20:19:43 -- common/autotest_common.sh@874 -- # size=4096
00:12:29.172 20:19:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.172 20:19:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:29.172 20:19:43 -- common/autotest_common.sh@877 -- # return 0
00:12:29.172 20:19:43 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:29.172 20:19:43 -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:29.172 20:19:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11
00:12:29.432 /dev/nbd11
00:12:29.432 20:19:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:12:29.432 20:19:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:12:29.432 20:19:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11
00:12:29.432 20:19:44 -- common/autotest_common.sh@857 -- # local i
00:12:29.432 20:19:44 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:29.432 20:19:44 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:29.432 20:19:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions
00:12:29.433 20:19:44 -- common/autotest_common.sh@861 -- # break
00:12:29.433 20:19:44 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:29.433 20:19:44 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:29.433 20:19:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:29.433 1+0 records in
00:12:29.433 1+0 records out
00:12:29.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492175 s, 8.3 MB/s
00:12:29.433 20:19:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.433 20:19:44 -- common/autotest_common.sh@874 -- # size=4096
00:12:29.433 20:19:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.433 20:19:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:29.433 20:19:44 -- common/autotest_common.sh@877 -- # return 0
00:12:29.433 20:19:44 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:29.433 20:19:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:29.433 20:19:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12
00:12:29.694 /dev/nbd12
00:12:29.694 20:19:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:12:29.694 20:19:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:12:29.694 20:19:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12
00:12:29.694 20:19:44 -- common/autotest_common.sh@857 -- # local i
00:12:29.694 20:19:44 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:29.694 20:19:44 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:29.694 20:19:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions
00:12:29.694 20:19:44 -- common/autotest_common.sh@861 -- # break
00:12:29.694 20:19:44 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:29.694 20:19:44 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:29.694 20:19:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:29.694 1+0 records in
00:12:29.694 1+0 records out
00:12:29.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136989 s, 3.0 MB/s
00:12:29.694 20:19:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.694 20:19:44 -- common/autotest_common.sh@874 -- # size=4096
00:12:29.694 20:19:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.694 20:19:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:29.694 20:19:44 -- common/autotest_common.sh@877 -- # return 0
00:12:29.694 20:19:44 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:29.694 20:19:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:29.694 20:19:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13
00:12:29.694 /dev/nbd13
00:12:29.694 20:19:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:12:29.955 20:19:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:12:29.955 20:19:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13
00:12:29.955 20:19:44 -- common/autotest_common.sh@857 -- # local i
00:12:29.955 20:19:44 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:12:29.955 20:19:44 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:12:29.955 20:19:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions
00:12:29.955 20:19:44 -- common/autotest_common.sh@861 -- # break
00:12:29.955 20:19:44 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:12:29.955 20:19:44 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:12:29.955 20:19:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:29.955 1+0 records in
00:12:29.955 1+0 records out
00:12:29.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117235 s, 3.5 MB/s
00:12:29.955 20:19:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.955 20:19:44 -- common/autotest_common.sh@874 -- # size=4096
00:12:29.955 20:19:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.955 20:19:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:12:29.955 20:19:44 -- common/autotest_common.sh@877 -- # return 0
00:12:29.955 20:19:44 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:29.955 20:19:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:29.955 20:19:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:29.955 20:19:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:29.955 20:19:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:29.955 20:19:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:12:29.955 {
00:12:29.955 "nbd_device": "/dev/nbd0",
00:12:29.955 "bdev_name": "nvme0n1"
00:12:29.955 },
00:12:29.955 {
00:12:29.955 "nbd_device": "/dev/nbd1",
00:12:29.955 "bdev_name": "nvme1n1"
00:12:29.955 },
00:12:29.955 {
00:12:29.955 "nbd_device": "/dev/nbd10",
00:12:29.955 "bdev_name": "nvme1n2"
00:12:29.955 },
00:12:29.955 {
00:12:29.955 "nbd_device": "/dev/nbd11",
00:12:29.955 "bdev_name": "nvme1n3"
00:12:29.955 },
00:12:29.955 {
00:12:29.955 "nbd_device": "/dev/nbd12",
00:12:29.955 "bdev_name": "nvme2n1"
00:12:29.955 },
00:12:29.955 {
00:12:29.955 "nbd_device": "/dev/nbd13",
00:12:29.955 "bdev_name": "nvme3n1"
00:12:29.955 }
00:12:29.955 ]'
00:12:29.955 20:19:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:29.955 20:19:44 -- bdev/nbd_common.sh@64 -- # echo '[
00:12:29.955 {
00:12:29.956 "nbd_device": "/dev/nbd0",
00:12:29.956 "bdev_name": "nvme0n1"
00:12:29.956 },
00:12:29.956 {
00:12:29.956 "nbd_device": "/dev/nbd1",
00:12:29.956 "bdev_name": "nvme1n1"
00:12:29.956 },
00:12:29.956 {
00:12:29.956 "nbd_device": "/dev/nbd10",
00:12:29.956 "bdev_name": "nvme1n2"
00:12:29.956 },
00:12:29.956 {
00:12:29.956 "nbd_device": "/dev/nbd11",
00:12:29.956 "bdev_name": "nvme1n3"
00:12:29.956 },
00:12:29.956 {
00:12:29.956 "nbd_device": "/dev/nbd12",
00:12:29.956 "bdev_name": "nvme2n1"
00:12:29.956 },
00:12:29.956 {
00:12:29.956 "nbd_device": "/dev/nbd13",
00:12:29.956 "bdev_name": "nvme3n1"
00:12:29.956 }
00:12:29.956 ]'
00:12:29.956 20:19:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:12:29.956 /dev/nbd1
00:12:29.956 /dev/nbd10
00:12:29.956 /dev/nbd11
00:12:29.956 /dev/nbd12
00:12:29.956 /dev/nbd13'
00:12:29.956 20:19:44 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:12:29.956 /dev/nbd1
00:12:29.956 /dev/nbd10
00:12:29.956 /dev/nbd11
00:12:29.956 /dev/nbd12
00:12:29.956 /dev/nbd13'
00:12:29.956 20:19:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:29.956 20:19:44 -- bdev/nbd_common.sh@65 -- # count=6
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@66 -- # echo 6
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@95 -- # count=6
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@71 -- # local operation=write
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:12:30.217 256+0 records in
00:12:30.217 256+0 records out
00:12:30.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010088 s, 104 MB/s
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.217 20:19:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:12:30.217 256+0 records in
00:12:30.217 256+0 records out
00:12:30.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228367 s, 4.6 MB/s
00:12:30.217 20:19:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.217 20:19:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:12:30.478 256+0 records in
00:12:30.478 256+0 records out
00:12:30.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240986 s, 4.4 MB/s
00:12:30.478 20:19:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.478 20:19:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:12:30.739 256+0 records in
00:12:30.739 256+0 records out
00:12:30.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239337 s, 4.4 MB/s
00:12:30.739 20:19:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.739 20:19:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:12:31.000 256+0 records in
00:12:31.000 256+0 records out
00:12:31.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225083 s, 4.7 MB/s
00:12:31.000 20:19:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:31.000 20:19:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:12:31.261 256+0 records in
00:12:31.261 256+0 records out
00:12:31.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.279791 s, 3.7 MB/s
00:12:31.261 20:19:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:31.261 20:19:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:12:31.522 256+0 records in
00:12:31.522 256+0 records out
00:12:31.522 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231209 s, 4.5 MB/s
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
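
End of the data pass: the same 1 MiB of random bytes went out to all six devices and compared clean against each one. Condensed into a standalone sketch (same scratch path as the run; the harness does the writes and the compares in two separate loops, fused here for brevity):

    # Fill a scratch file with 1 MiB of random data, push it through each
    # nbd device with O_DIRECT, then compare the first 1 MiB read back.
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"
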
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@51 -- # local i
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:31.522 20:19:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@41 -- # break
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@45 -- # return 0
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:31.783 20:19:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@41 -- # break
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@45 -- # return 0
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:32.044 20:19:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@41 -- # break
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@45 -- # return 0
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@41 -- # break
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@45 -- # return 0
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:32.306 20:19:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@41 -- # break
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@45 -- # return 0
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:32.565 20:19:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@41 -- # break
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@45 -- # return 0
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:32.825 20:19:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@65 -- # echo ''
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@65 -- # true
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@65 -- # count=0
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@66 -- # echo 0
00:12:33.085 20:19:47 -- bdev/nbd_common.sh@104 -- # count=0
00:12:33.086 20:19:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:33.086 20:19:47 -- bdev/nbd_common.sh@109 -- # return 0
00:12:33.086 20:19:47 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:12:33.086 20:19:47 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:33.086 20:19:47 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:33.086 20:19:47 -- bdev/nbd_common.sh@132 -- # local nbd_list
00:12:33.086 20:19:47 -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:12:33.086 20:19:47 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:12:33.086 malloc_lvol_verify
00:12:33.346 20:19:48 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:12:33.346 e06498cf-37f4-4d1d-bae4-b6b4be448144
00:12:33.607 20:19:48 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:12:33.607 67ff9800-d737-442f-af6a-24801a611b81
00:12:33.607 20:19:48 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:12:33.869 /dev/nbd0
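
The lvol leg builds everything it filesystem-tests from scratch. As a standalone sequence against the same socket (the mkfs step follows immediately in the trace below):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512-byte blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvol store on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # carve out a 4 MiB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose it to the kernel
    mkfs.ext4 /dev/nbd0                                    # prove it takes a filesystem
    $rpc nbd_stop_disk /dev/nbd0
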
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:33.869 mke2fs 1.47.0 (5-Feb-2023) 00:12:33.869 Discarding device blocks: 0/4096 done 00:12:33.869 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:33.869 00:12:33.869 Allocating group tables: 0/1 done 00:12:33.869 Writing inode tables: 0/1 done 00:12:33.869 Creating journal (1024 blocks): done 00:12:33.869 Writing superblocks and filesystem accounting information: 0/1 done 00:12:33.869 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@51 -- # local i 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@41 -- # break 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:33.869 20:19:48 -- bdev/nbd_common.sh@147 -- # return 0 00:12:33.869 20:19:48 -- bdev/blockdev.sh@324 -- # killprocess 67924 00:12:33.869 20:19:48 -- common/autotest_common.sh@926 -- # '[' -z 67924 ']' 00:12:33.869 20:19:48 -- common/autotest_common.sh@930 -- # kill -0 67924 00:12:33.869 20:19:48 -- common/autotest_common.sh@931 -- # uname 00:12:33.869 20:19:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:33.869 20:19:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67924 00:12:34.130 killing process with pid 67924 00:12:34.130 20:19:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:34.130 20:19:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:34.130 20:19:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67924' 00:12:34.130 20:19:48 -- common/autotest_common.sh@945 -- # kill 67924 00:12:34.130 20:19:48 -- common/autotest_common.sh@950 -- # wait 67924 00:12:34.702 ************************************ 00:12:34.702 END TEST bdev_nbd 00:12:34.702 ************************************ 00:12:34.703 20:19:49 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:34.703 00:12:34.703 real 0m9.939s 00:12:34.703 user 0m13.356s 00:12:34.703 sys 0m3.332s 00:12:34.703 20:19:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.703 20:19:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.703 20:19:49 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:34.703 20:19:49 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:34.703 20:19:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
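The teardown sequence above exercises the waitfornbd_exit helper from bdev/nbd_common.sh: after nbd_stop_disk is issued over the RPC socket, the helper polls /proc/partitions until the named nbd device disappears, bounded at 20 attempts. A minimal sketch reconstructed from the xtrace output (the function shape and the 20-iteration bound follow the trace; the retry interval is an assumption, since every device in this run vanished on the first check):

    waitfornbd_exit() {
        local nbd_name=$1
        # Poll /proc/partitions until the named nbd device is gone,
        # bounded at 20 attempts as in the trace above.
        for (( i = 1; i <= 20; i++ )); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1  # assumed interval; the trace never reaches a retry
        done
        return 0
    }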
00:12:34.703 20:19:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:34.703 20:19:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.703 ************************************ 00:12:34.703 START TEST bdev_fio 00:12:34.703 ************************************ 00:12:34.703 20:19:49 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@329 -- # local env_context 00:12:34.703 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:34.703 20:19:49 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:34.703 20:19:49 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:34.703 20:19:49 -- bdev/blockdev.sh@337 -- # echo '' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:34.703 20:19:49 -- bdev/blockdev.sh@337 -- # env_context= 00:12:34.703 20:19:49 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:34.703 20:19:49 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:34.703 20:19:49 -- common/autotest_common.sh@1260 -- # local workload=verify 00:12:34.703 20:19:49 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:12:34.703 20:19:49 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:34.703 20:19:49 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:34.703 20:19:49 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:34.703 20:19:49 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:12:34.703 20:19:49 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:34.703 20:19:49 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:34.703 20:19:49 -- common/autotest_common.sh@1280 -- # cat 00:12:34.703 20:19:49 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:12:34.703 20:19:49 -- common/autotest_common.sh@1293 -- # cat 00:12:34.703 20:19:49 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:12:34.703 20:19:49 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:12:34.703 20:19:49 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:34.703 20:19:49 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:12:34.703 20:19:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:34.703 20:19:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:34.703 20:19:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:34.703 20:19:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:34.703 20:19:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:34.703 20:19:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:34.703 20:19:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:34.703 20:19:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:34.703 20:19:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:34.703 20:19:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@341 -- # echo 
filename=nvme2n1 00:12:34.703 20:19:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:34.703 20:19:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:34.703 20:19:49 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:34.703 20:19:49 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:34.703 20:19:49 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:34.703 20:19:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:34.703 20:19:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.703 ************************************ 00:12:34.703 START TEST bdev_fio_rw_verify 00:12:34.703 ************************************ 00:12:34.703 20:19:49 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:34.703 20:19:49 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:34.703 20:19:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:34.703 20:19:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:34.703 20:19:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:34.703 20:19:49 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:34.703 20:19:49 -- common/autotest_common.sh@1320 -- # shift 00:12:34.703 20:19:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:34.703 20:19:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:34.704 20:19:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:34.704 20:19:49 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:34.704 20:19:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:34.704 20:19:49 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:34.704 20:19:49 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:34.704 20:19:49 -- common/autotest_common.sh@1326 -- # break 00:12:34.704 20:19:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:34.704 20:19:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:34.964 
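Before launching fio, the trace above resolves which ASan runtime the spdk_bdev fio plugin links against and preloads it together with the plugin, because fio itself is not built with the sanitizer. A condensed sketch of that logic (the real helper also falls back to libclang_rt.asan, visible in the sanitizers array above; error handling is omitted here):

    fio_plugin() {
        local plugin=$1; shift
        local asan_lib
        # Resolve the ASan shared object the plugin depends on.
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        # Preload the sanitizer ahead of the plugin so fio can load it,
        # then run fio with the remaining arguments.
        LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"
    }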
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.964 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.964 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.964 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.964 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.964 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.964 fio-3.35 00:12:34.964 Starting 6 threads 00:12:47.302 00:12:47.302 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68317: Wed Oct 16 20:20:00 2024 00:12:47.302 read: IOPS=16.8k, BW=65.8MiB/s (69.0MB/s)(658MiB/10003msec) 00:12:47.302 slat (usec): min=2, max=2272, avg= 5.90, stdev=12.34 00:12:47.302 clat (usec): min=77, max=557424, avg=1147.37, stdev=3914.18 00:12:47.303 lat (usec): min=80, max=557431, avg=1153.28, stdev=3914.35 00:12:47.303 clat percentiles (usec): 00:12:47.303 | 50.000th=[ 938], 99.000th=[ 3589], 99.900th=[ 4883], 00:12:47.303 | 99.990th=[ 7898], 99.999th=[557843] 00:12:47.303 write: IOPS=17.2k, BW=67.3MiB/s (70.6MB/s)(673MiB/10003msec); 0 zone resets 00:12:47.303 slat (usec): min=5, max=4017, avg=36.92, stdev=133.80 00:12:47.303 clat (usec): min=80, max=8694, avg=1364.78, stdev=912.99 00:12:47.303 lat (usec): min=97, max=8726, avg=1401.71, stdev=928.34 00:12:47.303 clat percentiles (usec): 00:12:47.303 | 50.000th=[ 1172], 99.000th=[ 4228], 99.900th=[ 5669], 99.990th=[ 6915], 00:12:47.303 | 99.999th=[ 8717] 00:12:47.303 bw ( KiB/s): min=47102, max=153320, per=100.00%, avg=70626.32, stdev=5046.83, samples=113 00:12:47.303 iops : min=11774, max=38330, avg=17655.94, stdev=1261.75, samples=113 00:12:47.303 lat (usec) : 100=0.03%, 250=5.35%, 500=15.42%, 750=15.48%, 1000=11.39% 00:12:47.303 lat (msec) : 2=34.87%, 4=16.49%, 10=0.97%, 750=0.01% 00:12:47.303 cpu : usr=46.10%, sys=30.75%, ctx=5973, majf=0, minf=18159 00:12:47.303 IO depths : 1=11.5%, 2=24.0%, 4=51.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:47.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.303 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.303 issued rwts: total=168489,172363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.303 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:47.303 00:12:47.303 Run status group 0 (all jobs): 00:12:47.303 READ: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=658MiB (690MB), run=10003-10003msec 00:12:47.303 WRITE: bw=67.3MiB/s (70.6MB/s), 67.3MiB/s-67.3MiB/s (70.6MB/s-70.6MB/s), io=673MiB (706MB), run=10003-10003msec 00:12:47.303 ----------------------------------------------------- 00:12:47.303 Suppressions used: 00:12:47.303 count bytes template 00:12:47.303 6 48 /usr/src/fio/parse.c 00:12:47.303 3776 362496 /usr/src/fio/iolog.c 00:12:47.303 1 8 libtcmalloc_minimal.so 00:12:47.303 1 904 libcrypto.so 00:12:47.303 ----------------------------------------------------- 00:12:47.303 00:12:47.303 00:12:47.303 real 0m12.037s 00:12:47.303 user 0m29.316s 00:12:47.303 sys 0m18.853s 00:12:47.303 ************************************ 00:12:47.303 END TEST bdev_fio_rw_verify 00:12:47.303 
************************************ 00:12:47.303 20:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.303 20:20:01 -- common/autotest_common.sh@10 -- # set +x 00:12:47.303 20:20:01 -- bdev/blockdev.sh@348 -- # rm -f 00:12:47.303 20:20:01 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:47.303 20:20:01 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:47.303 20:20:01 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:47.303 20:20:01 -- common/autotest_common.sh@1260 -- # local workload=trim 00:12:47.303 20:20:01 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:12:47.303 20:20:01 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:47.303 20:20:01 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:47.303 20:20:01 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:47.303 20:20:01 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:12:47.303 20:20:01 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:47.303 20:20:01 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:47.303 20:20:01 -- common/autotest_common.sh@1280 -- # cat 00:12:47.303 20:20:01 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:12:47.303 20:20:01 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:12:47.303 20:20:01 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:12:47.303 20:20:01 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "89d7f7e2-d8c1-4b38-9b82-4adfc3413da8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "89d7f7e2-d8c1-4b38-9b82-4adfc3413da8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3b3f4579-d53d-42db-8a1c-56cdc8798bd8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3b3f4579-d53d-42db-8a1c-56cdc8798bd8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "681e60b8-c95a-43f0-9f80-5c67d3981dcc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "681e60b8-c95a-43f0-9f80-5c67d3981dcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "3962c1a4-701c-46b8-a641-ee0fc3ad1e04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3962c1a4-701c-46b8-a641-ee0fc3ad1e04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "51b3e2a6-9bc5-4801-a9b8-aef0d6934fbc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "51b3e2a6-9bc5-4801-a9b8-aef0d6934fbc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a30c9e1e-bdb7-4b1c-bf4b-78408667b9dd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a30c9e1e-bdb7-4b1c-bf4b-78408667b9dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:47.303 20:20:01 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:47.303 20:20:01 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:47.303 20:20:01 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:47.303 /home/vagrant/spdk_repo/spdk 00:12:47.303 20:20:01 -- bdev/blockdev.sh@360 -- # popd 00:12:47.303 20:20:01 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:47.303 20:20:01 -- bdev/blockdev.sh@362 -- # return 0 00:12:47.303 00:12:47.303 real 0m12.204s 00:12:47.303 user 0m29.380s 00:12:47.303 sys 0m18.934s 00:12:47.303 20:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.303 20:20:01 -- common/autotest_common.sh@10 -- # set +x 00:12:47.303 ************************************ 00:12:47.303 END TEST bdev_fio 00:12:47.303 ************************************ 00:12:47.303 20:20:01 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:47.303 20:20:01 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:47.303 20:20:01 -- common/autotest_common.sh@1077 -- # '[' 16 -le 
1 ']' 00:12:47.303 20:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:47.303 20:20:01 -- common/autotest_common.sh@10 -- # set +x 00:12:47.303 ************************************ 00:12:47.303 START TEST bdev_verify 00:12:47.303 ************************************ 00:12:47.303 20:20:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:47.303 [2024-10-16 20:20:01.860191] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:47.303 [2024-10-16 20:20:01.860328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68492 ] 00:12:47.303 [2024-10-16 20:20:02.017178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:47.564 [2024-10-16 20:20:02.241581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.564 [2024-10-16 20:20:02.241668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.826 Running I/O for 5 seconds... 00:12:53.120 00:12:53.120 Latency(us) 00:12:53.120 [2024-10-16T20:20:08.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0x0 length 0x20000 00:12:53.120 nvme0n1 : 5.08 2118.63 8.28 0.00 0.00 60158.97 7561.85 78239.90 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0x20000 length 0x20000 00:12:53.120 nvme0n1 : 5.06 2228.33 8.70 0.00 0.00 57301.63 4058.19 72593.72 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0x0 length 0x80000 00:12:53.120 nvme1n1 : 5.09 2079.32 8.12 0.00 0.00 61064.44 3428.04 87515.77 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0x80000 length 0x80000 00:12:53.120 nvme1n1 : 5.10 2004.61 7.83 0.00 0.00 63353.91 4663.14 89532.26 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0x0 length 0x80000 00:12:53.120 nvme1n2 : 5.09 1991.48 7.78 0.00 0.00 63684.73 19559.98 81466.29 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0x80000 length 0x80000 00:12:53.120 nvme1n2 : 5.08 2098.84 8.20 0.00 0.00 60625.15 13208.02 78239.90 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0x0 length 0x80000 00:12:53.120 nvme1n3 : 5.10 1994.58 7.79 0.00 0.00 63536.26 6805.66 83079.48 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0x80000 length 0x80000 00:12:53.120 nvme1n3 : 5.08 2039.88 7.97 0.00 0.00 62239.55 17946.78 76626.71 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, 
IO size: 4096) 00:12:53.120 Verification LBA range: start 0x0 length 0xbd0bd 00:12:53.120 nvme2n1 : 5.10 1943.11 7.59 0.00 0.00 65175.90 10334.52 78643.20 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:53.120 nvme2n1 : 5.09 1958.61 7.65 0.00 0.00 64713.52 5721.80 87515.77 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0x0 length 0xa0000 00:12:53.120 nvme3n1 : 5.10 2227.01 8.70 0.00 0.00 56827.13 5142.06 77030.01 00:12:53.120 [2024-10-16T20:20:08.049Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.120 Verification LBA range: start 0xa0000 length 0xa0000 00:12:53.120 nvme3n1 : 5.10 2170.39 8.48 0.00 0.00 58350.00 8620.50 75013.51 00:12:53.120 [2024-10-16T20:20:08.049Z] =================================================================================================================== 00:12:53.120 [2024-10-16T20:20:08.049Z] Total : 24854.78 97.09 0.00 0.00 61295.92 3428.04 89532.26 00:12:54.061 00:12:54.061 real 0m6.903s 00:12:54.061 user 0m8.721s 00:12:54.061 sys 0m3.104s 00:12:54.061 20:20:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.061 ************************************ 00:12:54.061 END TEST bdev_verify 00:12:54.061 ************************************ 00:12:54.061 20:20:08 -- common/autotest_common.sh@10 -- # set +x 00:12:54.061 20:20:08 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:54.061 20:20:08 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:54.061 20:20:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:54.061 20:20:08 -- common/autotest_common.sh@10 -- # set +x 00:12:54.061 ************************************ 00:12:54.061 START TEST bdev_verify_big_io 00:12:54.061 ************************************ 00:12:54.061 20:20:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:54.061 [2024-10-16 20:20:08.826785] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:54.061 [2024-10-16 20:20:08.826923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68587 ] 00:12:54.061 [2024-10-16 20:20:08.979461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:54.322 [2024-10-16 20:20:09.199315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.322 [2024-10-16 20:20:09.199422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.894 Running I/O for 5 seconds... 
00:13:01.483 00:13:01.483 Latency(us) 00:13:01.483 [2024-10-16T20:20:16.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.483 [2024-10-16T20:20:16.412Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:01.483 Verification LBA range: start 0x0 length 0x2000 00:13:01.483 nvme0n1 : 5.54 251.12 15.69 0.00 0.00 494678.76 64527.75 551712.30 00:13:01.483 [2024-10-16T20:20:16.412Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:01.483 Verification LBA range: start 0x2000 length 0x2000 00:13:01.483 nvme0n1 : 5.51 252.23 15.76 0.00 0.00 488377.04 55251.89 509769.26 00:13:01.483 [2024-10-16T20:20:16.412Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:01.483 Verification LBA range: start 0x0 length 0x8000 00:13:01.483 nvme1n1 : 5.54 250.87 15.68 0.00 0.00 482137.66 149220.43 532353.97 00:13:01.483 [2024-10-16T20:20:16.412Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:01.483 Verification LBA range: start 0x8000 length 0x8000 00:13:01.483 nvme1n1 : 5.54 299.63 18.73 0.00 0.00 407635.32 107277.39 490410.93 00:13:01.483 [2024-10-16T20:20:16.412Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:01.483 Verification LBA range: start 0x0 length 0x8000 00:13:01.483 nvme1n2 : 5.55 234.81 14.68 0.00 0.00 506788.93 116956.55 577523.40 00:13:01.483 [2024-10-16T20:20:16.412Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:01.483 Verification LBA range: start 0x8000 length 0x8000 00:13:01.483 nvme1n2 : 5.54 251.08 15.69 0.00 0.00 480216.26 23895.43 506542.87 00:13:01.483 [2024-10-16T20:20:16.412Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:01.483 Verification LBA range: start 0x0 length 0x8000 00:13:01.483 nvme1n3 : 5.56 250.37 15.65 0.00 0.00 477446.88 39119.95 525901.19 00:13:01.483 [2024-10-16T20:20:16.412Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:01.483 Verification LBA range: start 0x8000 length 0x8000 00:13:01.484 nvme1n3 : 5.54 235.18 14.70 0.00 0.00 504898.75 78643.20 535580.36 00:13:01.484 [2024-10-16T20:20:16.413Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:01.484 Verification LBA range: start 0x0 length 0xbd0b 00:13:01.484 nvme2n1 : 5.56 314.19 19.64 0.00 0.00 376259.11 9880.81 535580.36 00:13:01.484 [2024-10-16T20:20:16.413Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:01.484 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:01.484 nvme2n1 : 5.56 357.18 22.32 0.00 0.00 331747.26 7259.37 506542.87 00:13:01.484 [2024-10-16T20:20:16.413Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:01.484 Verification LBA range: start 0x0 length 0xa000 00:13:01.484 nvme3n1 : 5.56 281.94 17.62 0.00 0.00 412962.38 20669.05 587202.56 00:13:01.484 [2024-10-16T20:20:16.413Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:01.484 Verification LBA range: start 0xa000 length 0xa000 00:13:01.484 nvme3n1 : 5.55 282.15 17.63 0.00 0.00 413933.17 18450.90 535580.36 00:13:01.484 [2024-10-16T20:20:16.413Z] =================================================================================================================== 00:13:01.484 [2024-10-16T20:20:16.413Z] Total : 3260.74 203.80 0.00 0.00 441019.41 7259.37 587202.56 00:13:01.484 00:13:01.484 real 0m7.606s 00:13:01.484 user 
0m13.463s 00:13:01.484 sys 0m0.632s 00:13:01.484 20:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.484 ************************************ 00:13:01.484 END TEST bdev_verify_big_io 00:13:01.484 ************************************ 00:13:01.484 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:13:01.746 20:20:16 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.746 20:20:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:01.746 20:20:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:01.746 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:13:01.746 ************************************ 00:13:01.746 START TEST bdev_write_zeroes 00:13:01.746 ************************************ 00:13:01.746 20:20:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.746 [2024-10-16 20:20:16.522148] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:01.746 [2024-10-16 20:20:16.522292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68697 ] 00:13:02.007 [2024-10-16 20:20:16.680156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.007 [2024-10-16 20:20:16.911548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.577 Running I/O for 1 seconds... 00:13:03.524 00:13:03.524 Latency(us) 00:13:03.524 [2024-10-16T20:20:18.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.524 [2024-10-16T20:20:18.453Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:03.524 nvme0n1 : 1.01 11891.35 46.45 0.00 0.00 10753.09 8519.68 19761.62 00:13:03.524 [2024-10-16T20:20:18.453Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:03.524 nvme1n1 : 1.01 11877.16 46.40 0.00 0.00 10757.23 8570.09 19559.98 00:13:03.524 [2024-10-16T20:20:18.453Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:03.524 nvme1n2 : 1.01 11863.50 46.34 0.00 0.00 10760.11 8570.09 19459.15 00:13:03.524 [2024-10-16T20:20:18.453Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:03.524 nvme1n3 : 1.02 11850.09 46.29 0.00 0.00 10764.07 8570.09 19358.33 00:13:03.524 [2024-10-16T20:20:18.453Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:03.524 nvme2n1 : 1.01 12762.06 49.85 0.00 0.00 9983.17 5167.26 16736.89 00:13:03.524 [2024-10-16T20:20:18.453Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:03.524 nvme3n1 : 1.02 11835.95 46.23 0.00 0.00 10755.54 9023.80 22786.36 00:13:03.524 [2024-10-16T20:20:18.453Z] =================================================================================================================== 00:13:03.524 [2024-10-16T20:20:18.453Z] Total : 72080.11 281.56 0.00 0.00 10621.23 5167.26 22786.36 00:13:04.470 00:13:04.470 real 0m2.801s 00:13:04.470 user 0m2.114s 00:13:04.470 sys 0m0.505s 00:13:04.470 20:20:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 
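The three verification passes in this stretch (bdev_verify, bdev_verify_big_io, bdev_write_zeroes) all drive the same bdevperf example binary against the generated bdev.json, varying only the workload flags. Collected from the invocations in the trace:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # 5 s verify pass: queue depth 128, 4 KiB I/O, two cores (mask 0x3).
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # Same harness with 64 KiB I/O for the big-I/O variant.
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3
    # Single-core write_zeroes pass for 1 s.
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1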
00:13:04.470 ************************************ 00:13:04.470 END TEST bdev_write_zeroes 00:13:04.470 ************************************ 00:13:04.470 20:20:19 -- common/autotest_common.sh@10 -- # set +x 00:13:04.470 20:20:19 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.470 20:20:19 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:04.470 20:20:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:04.470 20:20:19 -- common/autotest_common.sh@10 -- # set +x 00:13:04.470 ************************************ 00:13:04.470 START TEST bdev_json_nonenclosed 00:13:04.470 ************************************ 00:13:04.470 20:20:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.470 [2024-10-16 20:20:19.394652] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:04.470 [2024-10-16 20:20:19.394789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68750 ] 00:13:04.731 [2024-10-16 20:20:19.545014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.992 [2024-10-16 20:20:19.763563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.992 [2024-10-16 20:20:19.763759] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:04.992 [2024-10-16 20:20:19.763787] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:05.253 00:13:05.253 real 0m0.753s 00:13:05.253 user 0m0.528s 00:13:05.253 sys 0m0.117s 00:13:05.253 ************************************ 00:13:05.253 END TEST bdev_json_nonenclosed 00:13:05.253 ************************************ 00:13:05.253 20:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.253 20:20:20 -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 20:20:20 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:05.253 20:20:20 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:05.253 20:20:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:05.253 20:20:20 -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 ************************************ 00:13:05.253 START TEST bdev_json_nonarray 00:13:05.253 ************************************ 00:13:05.253 20:20:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:05.514 [2024-10-16 20:20:20.209358] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:05.514 [2024-10-16 20:20:20.209502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68781 ] 00:13:05.514 [2024-10-16 20:20:20.363530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.775 [2024-10-16 20:20:20.582323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.775 [2024-10-16 20:20:20.582548] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:05.775 [2024-10-16 20:20:20.582568] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:06.037 00:13:06.037 real 0m0.751s 00:13:06.037 user 0m0.525s 00:13:06.037 sys 0m0.118s 00:13:06.037 ************************************ 00:13:06.037 END TEST bdev_json_nonarray 00:13:06.037 ************************************ 00:13:06.037 20:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.037 20:20:20 -- common/autotest_common.sh@10 -- # set +x 00:13:06.037 20:20:20 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:06.037 20:20:20 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:06.037 20:20:20 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:06.037 20:20:20 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:06.037 20:20:20 -- bdev/blockdev.sh@809 -- # cleanup 00:13:06.037 20:20:20 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:06.037 20:20:20 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:06.037 20:20:20 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:06.037 20:20:20 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:06.037 20:20:20 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:06.037 20:20:20 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:06.037 20:20:20 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:06.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:10.283 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.283 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.545 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.545 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.545 00:13:10.545 real 0m57.402s 00:13:10.545 user 1m22.802s 00:13:10.545 sys 0m37.075s 00:13:10.545 ************************************ 00:13:10.545 END TEST blockdev_xnvme 00:13:10.545 ************************************ 00:13:10.545 20:20:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.545 20:20:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.545 20:20:25 -- spdk/autotest.sh@259 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:10.545 20:20:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:10.545 20:20:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:10.545 20:20:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.806 ************************************ 00:13:10.806 START TEST ublk 00:13:10.806 ************************************ 00:13:10.806 20:20:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:10.806 * Looking for test storage... 
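Both bdev_json_nonenclosed and bdev_json_nonarray above are negative tests: bdevperf is pointed at a deliberately malformed JSON configuration and the run only counts as passing when the parser rejects it with the matching error ("not enclosed in {}" and "'subsystems' should be an array" respectively). The file contents are not shown in this log; a hedged sketch of the underlying check, using the nonenclosed.json path from the trace (how run_test consumes the exit status is an assumption, since the log only shows the ERROR line and the app stopping):

    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
    # The parser must reject the malformed config; anything else is a bug.
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1 '' \
        && echo 'unexpected: malformed config accepted' >&2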
00:13:10.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:10.806 20:20:25 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:10.806 20:20:25 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:10.806 20:20:25 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:10.806 20:20:25 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:10.806 20:20:25 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:10.806 20:20:25 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:10.806 20:20:25 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:10.806 20:20:25 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:10.806 20:20:25 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:10.806 20:20:25 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:10.806 20:20:25 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:10.806 20:20:25 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:10.806 20:20:25 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:10.806 20:20:25 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:10.806 20:20:25 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:10.806 20:20:25 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:10.806 20:20:25 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:10.806 20:20:25 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:10.806 20:20:25 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:10.806 20:20:25 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:10.806 20:20:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:10.806 20:20:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:10.806 20:20:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.806 ************************************ 00:13:10.806 START TEST test_save_ublk_config 00:13:10.806 ************************************ 00:13:10.806 20:20:25 -- common/autotest_common.sh@1104 -- # test_save_config 00:13:10.806 20:20:25 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:10.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.806 20:20:25 -- ublk/ublk.sh@103 -- # tgtpid=69089 00:13:10.806 20:20:25 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:10.806 20:20:25 -- ublk/ublk.sh@106 -- # waitforlisten 69089 00:13:10.806 20:20:25 -- common/autotest_common.sh@819 -- # '[' -z 69089 ']' 00:13:10.806 20:20:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.806 20:20:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:10.806 20:20:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.806 20:20:25 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:10.806 20:20:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:10.806 20:20:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.806 [2024-10-16 20:20:25.669526] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:10.806 [2024-10-16 20:20:25.669670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69089 ] 00:13:11.067 [2024-10-16 20:20:25.827381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.328 [2024-10-16 20:20:26.057844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:11.328 [2024-10-16 20:20:26.058102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.270 20:20:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:12.270 20:20:27 -- common/autotest_common.sh@852 -- # return 0 00:13:12.270 20:20:27 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:12.270 20:20:27 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:12.270 20:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.270 20:20:27 -- common/autotest_common.sh@10 -- # set +x 00:13:12.270 [2024-10-16 20:20:27.191905] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:12.537 malloc0 00:13:12.537 [2024-10-16 20:20:27.263203] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:12.537 [2024-10-16 20:20:27.263303] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:12.537 [2024-10-16 20:20:27.263311] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:12.537 [2024-10-16 20:20:27.263321] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:12.537 [2024-10-16 20:20:27.279077] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:12.537 [2024-10-16 20:20:27.279107] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:12.537 [2024-10-16 20:20:27.287085] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:12.537 [2024-10-16 20:20:27.287213] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:12.537 [2024-10-16 20:20:27.304074] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:12.537 0 00:13:12.537 20:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.537 20:20:27 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:12.537 20:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.537 20:20:27 -- common/autotest_common.sh@10 -- # set +x 00:13:12.797 20:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.797 20:20:27 -- ublk/ublk.sh@115 -- # config='{ 00:13:12.797 "subsystems": [ 00:13:12.797 { 00:13:12.797 "subsystem": "iobuf", 00:13:12.797 "config": [ 00:13:12.797 { 00:13:12.797 "method": "iobuf_set_options", 00:13:12.797 "params": { 00:13:12.797 "small_pool_count": 8192, 00:13:12.797 "large_pool_count": 1024, 00:13:12.797 "small_bufsize": 8192, 00:13:12.797 "large_bufsize": 135168 00:13:12.797 } 00:13:12.797 } 00:13:12.797 ] 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "subsystem": "sock", 00:13:12.797 "config": [ 00:13:12.797 { 00:13:12.797 "method": "sock_impl_set_options", 00:13:12.797 "params": { 00:13:12.797 "impl_name": "posix", 00:13:12.797 "recv_buf_size": 2097152, 00:13:12.797 "send_buf_size": 2097152, 00:13:12.797 "enable_recv_pipe": true, 00:13:12.797 "enable_quickack": false, 00:13:12.797 "enable_placement_id": 0, 00:13:12.797 
"enable_zerocopy_send_server": true, 00:13:12.797 "enable_zerocopy_send_client": false, 00:13:12.797 "zerocopy_threshold": 0, 00:13:12.797 "tls_version": 0, 00:13:12.797 "enable_ktls": false 00:13:12.797 } 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "method": "sock_impl_set_options", 00:13:12.797 "params": { 00:13:12.797 "impl_name": "ssl", 00:13:12.797 "recv_buf_size": 4096, 00:13:12.797 "send_buf_size": 4096, 00:13:12.797 "enable_recv_pipe": true, 00:13:12.797 "enable_quickack": false, 00:13:12.797 "enable_placement_id": 0, 00:13:12.797 "enable_zerocopy_send_server": true, 00:13:12.797 "enable_zerocopy_send_client": false, 00:13:12.797 "zerocopy_threshold": 0, 00:13:12.797 "tls_version": 0, 00:13:12.797 "enable_ktls": false 00:13:12.797 } 00:13:12.797 } 00:13:12.797 ] 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "subsystem": "vmd", 00:13:12.797 "config": [] 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "subsystem": "accel", 00:13:12.797 "config": [ 00:13:12.797 { 00:13:12.797 "method": "accel_set_options", 00:13:12.797 "params": { 00:13:12.797 "small_cache_size": 128, 00:13:12.797 "large_cache_size": 16, 00:13:12.797 "task_count": 2048, 00:13:12.797 "sequence_count": 2048, 00:13:12.797 "buf_count": 2048 00:13:12.797 } 00:13:12.797 } 00:13:12.797 ] 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "subsystem": "bdev", 00:13:12.797 "config": [ 00:13:12.797 { 00:13:12.797 "method": "bdev_set_options", 00:13:12.797 "params": { 00:13:12.797 "bdev_io_pool_size": 65535, 00:13:12.797 "bdev_io_cache_size": 256, 00:13:12.797 "bdev_auto_examine": true, 00:13:12.797 "iobuf_small_cache_size": 128, 00:13:12.797 "iobuf_large_cache_size": 16 00:13:12.797 } 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "method": "bdev_raid_set_options", 00:13:12.797 "params": { 00:13:12.797 "process_window_size_kb": 1024 00:13:12.797 } 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "method": "bdev_iscsi_set_options", 00:13:12.797 "params": { 00:13:12.797 "timeout_sec": 30 00:13:12.797 } 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "method": "bdev_nvme_set_options", 00:13:12.797 "params": { 00:13:12.797 "action_on_timeout": "none", 00:13:12.797 "timeout_us": 0, 00:13:12.797 "timeout_admin_us": 0, 00:13:12.797 "keep_alive_timeout_ms": 10000, 00:13:12.797 "transport_retry_count": 4, 00:13:12.797 "arbitration_burst": 0, 00:13:12.797 "low_priority_weight": 0, 00:13:12.797 "medium_priority_weight": 0, 00:13:12.797 "high_priority_weight": 0, 00:13:12.797 "nvme_adminq_poll_period_us": 10000, 00:13:12.797 "nvme_ioq_poll_period_us": 0, 00:13:12.797 "io_queue_requests": 0, 00:13:12.797 "delay_cmd_submit": true, 00:13:12.797 "bdev_retry_count": 3, 00:13:12.797 "transport_ack_timeout": 0, 00:13:12.797 "ctrlr_loss_timeout_sec": 0, 00:13:12.797 "reconnect_delay_sec": 0, 00:13:12.797 "fast_io_fail_timeout_sec": 0, 00:13:12.797 "generate_uuids": false, 00:13:12.797 "transport_tos": 0, 00:13:12.797 "io_path_stat": false, 00:13:12.797 "allow_accel_sequence": false 00:13:12.797 } 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "method": "bdev_nvme_set_hotplug", 00:13:12.797 "params": { 00:13:12.797 "period_us": 100000, 00:13:12.797 "enable": false 00:13:12.797 } 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 "method": "bdev_malloc_create", 00:13:12.797 "params": { 00:13:12.797 "name": "malloc0", 00:13:12.797 "num_blocks": 8192, 00:13:12.797 "block_size": 4096, 00:13:12.797 "physical_block_size": 4096, 00:13:12.797 "uuid": "42fb0911-75f1-491d-ab30-d888a619921b", 00:13:12.797 "optimal_io_boundary": 0 00:13:12.797 } 00:13:12.797 }, 00:13:12.797 { 00:13:12.797 
"method": "bdev_wait_for_examine" 00:13:12.797 } 00:13:12.798 ] 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "subsystem": "scsi", 00:13:12.798 "config": null 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "subsystem": "scheduler", 00:13:12.798 "config": [ 00:13:12.798 { 00:13:12.798 "method": "framework_set_scheduler", 00:13:12.798 "params": { 00:13:12.798 "name": "static" 00:13:12.798 } 00:13:12.798 } 00:13:12.798 ] 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "subsystem": "vhost_scsi", 00:13:12.798 "config": [] 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "subsystem": "vhost_blk", 00:13:12.798 "config": [] 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "subsystem": "ublk", 00:13:12.798 "config": [ 00:13:12.798 { 00:13:12.798 "method": "ublk_create_target", 00:13:12.798 "params": { 00:13:12.798 "cpumask": "1" 00:13:12.798 } 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "method": "ublk_start_disk", 00:13:12.798 "params": { 00:13:12.798 "bdev_name": "malloc0", 00:13:12.798 "ublk_id": 0, 00:13:12.798 "num_queues": 1, 00:13:12.798 "queue_depth": 128 00:13:12.798 } 00:13:12.798 } 00:13:12.798 ] 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "subsystem": "nbd", 00:13:12.798 "config": [] 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "subsystem": "nvmf", 00:13:12.798 "config": [ 00:13:12.798 { 00:13:12.798 "method": "nvmf_set_config", 00:13:12.798 "params": { 00:13:12.798 "discovery_filter": "match_any", 00:13:12.798 "admin_cmd_passthru": { 00:13:12.798 "identify_ctrlr": false 00:13:12.798 } 00:13:12.798 } 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "method": "nvmf_set_max_subsystems", 00:13:12.798 "params": { 00:13:12.798 "max_subsystems": 1024 00:13:12.798 } 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "method": "nvmf_set_crdt", 00:13:12.798 "params": { 00:13:12.798 "crdt1": 0, 00:13:12.798 "crdt2": 0, 00:13:12.798 "crdt3": 0 00:13:12.798 } 00:13:12.798 } 00:13:12.798 ] 00:13:12.798 }, 00:13:12.798 { 00:13:12.798 "subsystem": "iscsi", 00:13:12.798 "config": [ 00:13:12.798 { 00:13:12.798 "method": "iscsi_set_options", 00:13:12.798 "params": { 00:13:12.798 "node_base": "iqn.2016-06.io.spdk", 00:13:12.798 "max_sessions": 128, 00:13:12.798 "max_connections_per_session": 2, 00:13:12.798 "max_queue_depth": 64, 00:13:12.798 "default_time2wait": 2, 00:13:12.798 "default_time2retain": 20, 00:13:12.798 "first_burst_length": 8192, 00:13:12.798 "immediate_data": true, 00:13:12.798 "allow_duplicated_isid": false, 00:13:12.798 "error_recovery_level": 0, 00:13:12.798 "nop_timeout": 60, 00:13:12.798 "nop_in_interval": 30, 00:13:12.798 "disable_chap": false, 00:13:12.798 "require_chap": false, 00:13:12.798 "mutual_chap": false, 00:13:12.798 "chap_group": 0, 00:13:12.798 "max_large_datain_per_connection": 64, 00:13:12.798 "max_r2t_per_connection": 4, 00:13:12.798 "pdu_pool_size": 36864, 00:13:12.798 "immediate_data_pool_size": 16384, 00:13:12.798 "data_out_pool_size": 2048 00:13:12.798 } 00:13:12.798 } 00:13:12.798 ] 00:13:12.798 } 00:13:12.798 ] 00:13:12.798 }' 00:13:12.798 20:20:27 -- ublk/ublk.sh@116 -- # killprocess 69089 00:13:12.798 20:20:27 -- common/autotest_common.sh@926 -- # '[' -z 69089 ']' 00:13:12.798 20:20:27 -- common/autotest_common.sh@930 -- # kill -0 69089 00:13:12.798 20:20:27 -- common/autotest_common.sh@931 -- # uname 00:13:12.798 20:20:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:12.798 20:20:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69089 00:13:12.798 20:20:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:12.798 20:20:27 -- 
common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:12.798 killing process with pid 69089 00:13:12.798 20:20:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69089' 00:13:12.798 20:20:27 -- common/autotest_common.sh@945 -- # kill 69089 00:13:12.798 20:20:27 -- common/autotest_common.sh@950 -- # wait 69089 00:13:13.741 [2024-10-16 20:20:28.537992] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:13.741 [2024-10-16 20:20:28.580081] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:13.741 [2024-10-16 20:20:28.580181] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:13.741 [2024-10-16 20:20:28.588117] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:13.741 [2024-10-16 20:20:28.588159] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:13.741 [2024-10-16 20:20:28.588168] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:13.741 [2024-10-16 20:20:28.588189] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:13.741 [2024-10-16 20:20:28.588292] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:15.125 20:20:29 -- ublk/ublk.sh@119 -- # tgtpid=69147 00:13:15.125 20:20:29 -- ublk/ublk.sh@121 -- # waitforlisten 69147 00:13:15.125 20:20:29 -- common/autotest_common.sh@819 -- # '[' -z 69147 ']' 00:13:15.125 20:20:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.125 20:20:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:15.125 20:20:29 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:15.125 20:20:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
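The save-config test above follows a round-trip pattern: the first spdk_tgt builds state at runtime (ublk target, malloc0 bdev, ublk disk), rpc_cmd save_config captures that state as JSON, the process is killed, and a second spdk_tgt (pid 69147 here) is started with the captured JSON fed back through -c /dev/fd/63. A sketch of the round trip using the helpers visible in the trace (rpc_cmd, killprocess, waitforlisten):

    # Capture the live configuration from the running target.
    config=$(rpc_cmd save_config)
    killprocess "$tgtpid"
    # Restart, feeding the saved JSON back via process substitution,
    # which appears as -c /dev/fd/63 on the command line above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c <(echo "$config") &
    tgtpid=$!
    waitforlisten "$tgtpid"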
00:13:15.125 20:20:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:15.125 20:20:29 -- common/autotest_common.sh@10 -- # set +x 00:13:15.125 20:20:29 -- ublk/ublk.sh@118 -- # echo '{ 00:13:15.125 "subsystems": [ 00:13:15.125 { 00:13:15.125 "subsystem": "iobuf", 00:13:15.125 "config": [ 00:13:15.125 { 00:13:15.125 "method": "iobuf_set_options", 00:13:15.125 "params": { 00:13:15.125 "small_pool_count": 8192, 00:13:15.125 "large_pool_count": 1024, 00:13:15.125 "small_bufsize": 8192, 00:13:15.125 "large_bufsize": 135168 00:13:15.125 } 00:13:15.125 } 00:13:15.125 ] 00:13:15.125 }, 00:13:15.125 { 00:13:15.125 "subsystem": "sock", 00:13:15.125 "config": [ 00:13:15.125 { 00:13:15.125 "method": "sock_impl_set_options", 00:13:15.125 "params": { 00:13:15.125 "impl_name": "posix", 00:13:15.125 "recv_buf_size": 2097152, 00:13:15.125 "send_buf_size": 2097152, 00:13:15.125 "enable_recv_pipe": true, 00:13:15.125 "enable_quickack": false, 00:13:15.125 "enable_placement_id": 0, 00:13:15.125 "enable_zerocopy_send_server": true, 00:13:15.125 "enable_zerocopy_send_client": false, 00:13:15.125 "zerocopy_threshold": 0, 00:13:15.125 "tls_version": 0, 00:13:15.125 "enable_ktls": false 00:13:15.125 } 00:13:15.125 }, 00:13:15.125 { 00:13:15.125 "method": "sock_impl_set_options", 00:13:15.125 "params": { 00:13:15.125 "impl_name": "ssl", 00:13:15.125 "recv_buf_size": 4096, 00:13:15.125 "send_buf_size": 4096, 00:13:15.125 "enable_recv_pipe": true, 00:13:15.126 "enable_quickack": false, 00:13:15.126 "enable_placement_id": 0, 00:13:15.126 "enable_zerocopy_send_server": true, 00:13:15.126 "enable_zerocopy_send_client": false, 00:13:15.126 "zerocopy_threshold": 0, 00:13:15.126 "tls_version": 0, 00:13:15.126 "enable_ktls": false 00:13:15.126 } 00:13:15.126 } 00:13:15.126 ] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "vmd", 00:13:15.126 "config": [] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "accel", 00:13:15.126 "config": [ 00:13:15.126 { 00:13:15.126 "method": "accel_set_options", 00:13:15.126 "params": { 00:13:15.126 "small_cache_size": 128, 00:13:15.126 "large_cache_size": 16, 00:13:15.126 "task_count": 2048, 00:13:15.126 "sequence_count": 2048, 00:13:15.126 "buf_count": 2048 00:13:15.126 } 00:13:15.126 } 00:13:15.126 ] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "bdev", 00:13:15.126 "config": [ 00:13:15.126 { 00:13:15.126 "method": "bdev_set_options", 00:13:15.126 "params": { 00:13:15.126 "bdev_io_pool_size": 65535, 00:13:15.126 "bdev_io_cache_size": 256, 00:13:15.126 "bdev_auto_examine": true, 00:13:15.126 "iobuf_small_cache_size": 128, 00:13:15.126 "iobuf_large_cache_size": 16 00:13:15.126 } 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "method": "bdev_raid_set_options", 00:13:15.126 "params": { 00:13:15.126 "process_window_size_kb": 1024 00:13:15.126 } 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "method": "bdev_iscsi_set_options", 00:13:15.126 "params": { 00:13:15.126 "timeout_sec": 30 00:13:15.126 } 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "method": "bdev_nvme_set_options", 00:13:15.126 "params": { 00:13:15.126 "action_on_timeout": "none", 00:13:15.126 "timeout_us": 0, 00:13:15.126 "timeout_admin_us": 0, 00:13:15.126 "keep_alive_timeout_ms": 10000, 00:13:15.126 "transport_retry_count": 4, 00:13:15.126 "arbitration_burst": 0, 00:13:15.126 "low_priority_weight": 0, 00:13:15.126 "medium_priority_weight": 0, 00:13:15.126 "high_priority_weight": 0, 00:13:15.126 "nvme_adminq_poll_period_us": 10000, 00:13:15.126 "nvme_ioq_poll_period_us": 0, 00:13:15.126 
"io_queue_requests": 0, 00:13:15.126 "delay_cmd_submit": true, 00:13:15.126 "bdev_retry_count": 3, 00:13:15.126 "transport_ack_timeout": 0, 00:13:15.126 "ctrlr_loss_timeout_sec": 0, 00:13:15.126 "reconnect_delay_sec": 0, 00:13:15.126 "fast_io_fail_timeout_sec": 0, 00:13:15.126 "generate_uuids": false, 00:13:15.126 "transport_tos": 0, 00:13:15.126 "io_path_stat": false, 00:13:15.126 "allow_accel_sequence": false 00:13:15.126 } 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "method": "bdev_nvme_set_hotplug", 00:13:15.126 "params": { 00:13:15.126 "period_us": 100000, 00:13:15.126 "enable": false 00:13:15.126 } 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "method": "bdev_malloc_create", 00:13:15.126 "params": { 00:13:15.126 "name": "malloc0", 00:13:15.126 "num_blocks": 8192, 00:13:15.126 "block_size": 4096, 00:13:15.126 "physical_block_size": 4096, 00:13:15.126 "uuid": "42fb0911-75f1-491d-ab30-d888a619921b", 00:13:15.126 "optimal_io_boundary": 0 00:13:15.126 } 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "method": "bdev_wait_for_examine" 00:13:15.126 } 00:13:15.126 ] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "scsi", 00:13:15.126 "config": null 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "scheduler", 00:13:15.126 "config": [ 00:13:15.126 { 00:13:15.126 "method": "framework_set_scheduler", 00:13:15.126 "params": { 00:13:15.126 "name": "static" 00:13:15.126 } 00:13:15.126 } 00:13:15.126 ] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "vhost_scsi", 00:13:15.126 "config": [] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "vhost_blk", 00:13:15.126 "config": [] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "ublk", 00:13:15.126 "config": [ 00:13:15.126 { 00:13:15.126 "method": "ublk_create_target", 00:13:15.126 "params": { 00:13:15.126 "cpumask": "1" 00:13:15.126 } 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "method": "ublk_start_disk", 00:13:15.126 "params": { 00:13:15.126 "bdev_name": "malloc0", 00:13:15.126 "ublk_id": 0, 00:13:15.126 "num_queues": 1, 00:13:15.126 "queue_depth": 128 00:13:15.126 } 00:13:15.126 } 00:13:15.126 ] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "nbd", 00:13:15.126 "config": [] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "nvmf", 00:13:15.126 "config": [ 00:13:15.126 { 00:13:15.126 "method": "nvmf_set_config", 00:13:15.126 "params": { 00:13:15.126 "discovery_filter": "match_any", 00:13:15.126 "admin_cmd_passthru": { 00:13:15.126 "identify_ctrlr": false 00:13:15.126 } 00:13:15.126 } 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "method": "nvmf_set_max_subsystems", 00:13:15.126 "params": { 00:13:15.126 "max_subsystems": 1024 00:13:15.126 } 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "method": "nvmf_set_crdt", 00:13:15.126 "params": { 00:13:15.126 "crdt1": 0, 00:13:15.126 "crdt2": 0, 00:13:15.126 "crdt3": 0 00:13:15.126 } 00:13:15.126 } 00:13:15.126 ] 00:13:15.126 }, 00:13:15.126 { 00:13:15.126 "subsystem": "iscsi", 00:13:15.126 "config": [ 00:13:15.126 { 00:13:15.126 "method": "iscsi_set_options", 00:13:15.126 "params": { 00:13:15.126 "node_base": "iqn.2016-06.io.spdk", 00:13:15.126 "max_sessions": 128, 00:13:15.126 "max_connections_per_session": 2, 00:13:15.126 "max_queue_depth": 64, 00:13:15.126 "default_time2wait": 2, 00:13:15.126 "default_time2retain": 20, 00:13:15.126 "first_burst_length": 8192, 00:13:15.126 "immediate_data": true, 00:13:15.126 "allow_duplicated_isid": false, 00:13:15.126 "error_recovery_level": 0, 00:13:15.126 "nop_timeout": 60, 00:13:15.126 "nop_in_interval": 30, 00:13:15.126 
"disable_chap": false, 00:13:15.126 "require_chap": false, 00:13:15.126 "mutual_chap": false, 00:13:15.126 "chap_group": 0, 00:13:15.126 "max_large_datain_per_connection": 64, 00:13:15.126 "max_r2t_per_connection": 4, 00:13:15.126 "pdu_pool_size": 36864, 00:13:15.126 "immediate_data_pool_size": 16384, 00:13:15.126 "data_out_pool_size": 2048 00:13:15.126 } 00:13:15.126 } 00:13:15.126 ] 00:13:15.126 } 00:13:15.126 ] 00:13:15.126 }' 00:13:15.126 [2024-10-16 20:20:29.841070] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:15.126 [2024-10-16 20:20:29.841179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69147 ] 00:13:15.126 [2024-10-16 20:20:29.987800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.387 [2024-10-16 20:20:30.157719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:15.387 [2024-10-16 20:20:30.157868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.960 [2024-10-16 20:20:30.741643] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:15.960 [2024-10-16 20:20:30.748137] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:15.960 [2024-10-16 20:20:30.748195] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:15.960 [2024-10-16 20:20:30.748202] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:15.960 [2024-10-16 20:20:30.748207] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:15.960 [2024-10-16 20:20:30.757138] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:15.960 [2024-10-16 20:20:30.757154] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:15.960 [2024-10-16 20:20:30.765063] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:15.960 [2024-10-16 20:20:30.765142] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:15.960 [2024-10-16 20:20:30.782069] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:16.532 20:20:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:16.532 20:20:31 -- common/autotest_common.sh@852 -- # return 0 00:13:16.532 20:20:31 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:16.532 20:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.532 20:20:31 -- common/autotest_common.sh@10 -- # set +x 00:13:16.532 20:20:31 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:16.532 20:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.532 20:20:31 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:16.532 20:20:31 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:16.532 20:20:31 -- ublk/ublk.sh@125 -- # killprocess 69147 00:13:16.532 20:20:31 -- common/autotest_common.sh@926 -- # '[' -z 69147 ']' 00:13:16.532 20:20:31 -- common/autotest_common.sh@930 -- # kill -0 69147 00:13:16.532 20:20:31 -- common/autotest_common.sh@931 -- # uname 00:13:16.532 20:20:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:16.532 20:20:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69147 00:13:16.532 
20:20:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:16.532 killing process with pid 69147 00:13:16.532 20:20:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:16.532 20:20:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69147' 00:13:16.532 20:20:31 -- common/autotest_common.sh@945 -- # kill 69147 00:13:16.532 20:20:31 -- common/autotest_common.sh@950 -- # wait 69147 00:13:17.475 [2024-10-16 20:20:32.321960] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:17.475 [2024-10-16 20:20:32.361109] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:17.475 [2024-10-16 20:20:32.361221] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:17.475 [2024-10-16 20:20:32.371067] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:17.475 [2024-10-16 20:20:32.371107] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:17.475 [2024-10-16 20:20:32.371112] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:17.475 [2024-10-16 20:20:32.371127] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:17.475 [2024-10-16 20:20:32.371234] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:18.863 20:20:33 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:18.863 00:13:18.863 real 0m7.971s 00:13:18.863 user 0m5.814s 00:13:18.863 sys 0m3.112s 00:13:18.863 20:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.863 20:20:33 -- common/autotest_common.sh@10 -- # set +x 00:13:18.863 ************************************ 00:13:18.863 END TEST test_save_ublk_config 00:13:18.863 ************************************ 00:13:18.863 20:20:33 -- ublk/ublk.sh@139 -- # spdk_pid=69227 00:13:18.863 20:20:33 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:18.863 20:20:33 -- ublk/ublk.sh@141 -- # waitforlisten 69227 00:13:18.863 20:20:33 -- common/autotest_common.sh@819 -- # '[' -z 69227 ']' 00:13:18.863 20:20:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.863 20:20:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:18.863 20:20:33 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:18.863 20:20:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.863 20:20:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:18.863 20:20:33 -- common/autotest_common.sh@10 -- # set +x 00:13:18.863 [2024-10-16 20:20:33.665324] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:18.863 [2024-10-16 20:20:33.665459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69227 ] 00:13:19.125 [2024-10-16 20:20:33.815713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:19.125 [2024-10-16 20:20:34.032293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:19.125 [2024-10-16 20:20:34.032973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.125 [2024-10-16 20:20:34.033078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.514 20:20:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:20.514 20:20:35 -- common/autotest_common.sh@852 -- # return 0 00:13:20.514 20:20:35 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:20.514 20:20:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:20.514 20:20:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:20.514 20:20:35 -- common/autotest_common.sh@10 -- # set +x 00:13:20.514 ************************************ 00:13:20.514 START TEST test_create_ublk 00:13:20.514 ************************************ 00:13:20.514 20:20:35 -- common/autotest_common.sh@1104 -- # test_create_ublk 00:13:20.514 20:20:35 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:20.514 20:20:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.514 20:20:35 -- common/autotest_common.sh@10 -- # set +x 00:13:20.515 [2024-10-16 20:20:35.193875] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:20.515 20:20:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.515 20:20:35 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:20.515 20:20:35 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:20.515 20:20:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.515 20:20:35 -- common/autotest_common.sh@10 -- # set +x 00:13:20.515 20:20:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.515 20:20:35 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:20.515 20:20:35 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:20.515 20:20:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.515 20:20:35 -- common/autotest_common.sh@10 -- # set +x 00:13:20.515 [2024-10-16 20:20:35.397184] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:20.515 [2024-10-16 20:20:35.397543] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:20.515 [2024-10-16 20:20:35.397555] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:20.515 [2024-10-16 20:20:35.397564] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:20.515 [2024-10-16 20:20:35.405323] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:20.515 [2024-10-16 20:20:35.405347] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:20.515 [2024-10-16 20:20:35.413068] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:20.515 [2024-10-16 20:20:35.430270] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:20.776 [2024-10-16 20:20:35.446068] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.776 20:20:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.776 20:20:35 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:20.776 20:20:35 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:20.776 20:20:35 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:20.776 20:20:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.776 20:20:35 -- common/autotest_common.sh@10 -- # set +x 00:13:20.776 20:20:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.776 20:20:35 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:20.776 { 00:13:20.776 "ublk_device": "/dev/ublkb0", 00:13:20.776 "id": 0, 00:13:20.776 "queue_depth": 512, 00:13:20.776 "num_queues": 4, 00:13:20.776 "bdev_name": "Malloc0" 00:13:20.776 } 00:13:20.776 ]' 00:13:20.776 20:20:35 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:20.776 20:20:35 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:20.776 20:20:35 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:20.776 20:20:35 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:20.776 20:20:35 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:20.776 20:20:35 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:20.776 20:20:35 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:20.776 20:20:35 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:20.776 20:20:35 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:20.776 20:20:35 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:20.776 20:20:35 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:20.776 20:20:35 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:20.776 20:20:35 -- lvol/common.sh@41 -- # local offset=0 00:13:20.776 20:20:35 -- lvol/common.sh@42 -- # local size=134217728 00:13:20.776 20:20:35 -- lvol/common.sh@43 -- # local rw=write 00:13:20.776 20:20:35 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:20.777 20:20:35 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:20.777 20:20:35 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:20.777 20:20:35 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:20.777 20:20:35 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:20.777 20:20:35 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:20.777 20:20:35 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:21.038 fio: verification read phase will never start because write phase uses all of runtime 00:13:21.038 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:21.038 fio-3.35 00:13:21.038 Starting 1 process 00:13:31.040 00:13:31.040 fio_test: (groupid=0, jobs=1): err= 0: pid=69281: Wed Oct 16 20:20:45 2024 00:13:31.040 write: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(576MiB/10001msec); 0 zone resets 00:13:31.040 clat (usec): min=42, max=4053, avg=67.07, stdev=95.29 00:13:31.040 lat (usec): min=42, max=4053, avg=67.49, stdev=95.30 00:13:31.040 clat percentiles (usec): 00:13:31.040 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 58], 00:13:31.040 | 30.00th=[ 
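Everything from ublk_create_target down to the fio launch above is the complete bring-up path: create the userspace ublk target, back it with a 128 MiB malloc bdev (4 KiB blocks), expose it as /dev/ublkb0 with 4 queues of depth 512, check the registration with ublk_get_disks plus jq, and then exercise the block device with a pattern-verify fio job. The "verification read phase will never start" notice is expected, since --time_based --runtime=10 lets the write phase consume the entire run. Condensed into direct rpc.py calls plus the same fio job, a sketch assuming an SPDK checkout (RPC names and fio flags are taken verbatim from the log):

# Bring up the device the same way the rpc_cmd calls above do.
./scripts/rpc.py ublk_create_target                      # create the userspace ublk target
./scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096  # 128 MiB bdev, 4096-byte blocks
./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # appears as /dev/ublkb0
./scripts/rpc.py ublk_get_disks -n 0                     # confirm id/queues/depth

# Write 0xcc over the first 128 MiB for 10 s with O_DIRECT, verifying the pattern.
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

Teardown mirrors the setup, as the log shows further down: ublk_stop_disk 0, then ublk_destroy_target, then bdev_malloc_delete Malloc0.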
60], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 64], 00:13:31.040 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 74], 95.00th=[ 78], 00:13:31.040 | 99.00th=[ 99], 99.50th=[ 118], 99.90th=[ 1876], 99.95th=[ 2802], 00:13:31.040 | 99.99th=[ 3589] 00:13:31.040 bw ( KiB/s): min=50578, max=64000, per=99.93%, avg=58946.21, stdev=3855.65, samples=19 00:13:31.040 iops : min=12644, max=16000, avg=14736.53, stdev=963.95, samples=19 00:13:31.040 lat (usec) : 50=1.72%, 100=97.32%, 250=0.74%, 500=0.04%, 750=0.01% 00:13:31.040 lat (usec) : 1000=0.02% 00:13:31.040 lat (msec) : 2=0.07%, 4=0.09%, 10=0.01% 00:13:31.040 cpu : usr=2.17%, sys=13.84%, ctx=147489, majf=0, minf=795 00:13:31.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.040 issued rwts: total=0,147476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.040 00:13:31.040 Run status group 0 (all jobs): 00:13:31.040 WRITE: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=576MiB (604MB), run=10001-10001msec 00:13:31.040 00:13:31.040 Disk stats (read/write): 00:13:31.040 ublkb0: ios=0/145901, merge=0/0, ticks=0/8207, in_queue=8207, util=99.10% 00:13:31.040 20:20:45 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:31.040 20:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.040 20:20:45 -- common/autotest_common.sh@10 -- # set +x 00:13:31.040 [2024-10-16 20:20:45.863484] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:31.040 [2024-10-16 20:20:45.906583] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:31.040 [2024-10-16 20:20:45.907607] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:31.041 [2024-10-16 20:20:45.915064] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:31.041 [2024-10-16 20:20:45.915303] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:31.041 [2024-10-16 20:20:45.915316] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:31.041 20:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.041 20:20:45 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:31.041 20:20:45 -- common/autotest_common.sh@640 -- # local es=0 00:13:31.041 20:20:45 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:31.041 20:20:45 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:13:31.041 20:20:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:31.041 20:20:45 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:13:31.041 20:20:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:31.041 20:20:45 -- common/autotest_common.sh@643 -- # rpc_cmd ublk_stop_disk 0 00:13:31.041 20:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.041 20:20:45 -- common/autotest_common.sh@10 -- # set +x 00:13:31.041 [2024-10-16 20:20:45.930160] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:31.041 request: 00:13:31.041 { 00:13:31.041 "ublk_id": 0, 00:13:31.041 "method": "ublk_stop_disk", 00:13:31.041 "req_id": 1 00:13:31.041 } 00:13:31.041 Got JSON-RPC error response 00:13:31.041 response: 00:13:31.041 { 00:13:31.041 "code": -19, 00:13:31.041 
"message": "No such device" 00:13:31.041 } 00:13:31.041 20:20:45 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:31.041 20:20:45 -- common/autotest_common.sh@643 -- # es=1 00:13:31.041 20:20:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:31.041 20:20:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:31.041 20:20:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:31.041 20:20:45 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:31.041 20:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.041 20:20:45 -- common/autotest_common.sh@10 -- # set +x 00:13:31.041 [2024-10-16 20:20:45.947111] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:31.041 [2024-10-16 20:20:45.955471] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:31.041 [2024-10-16 20:20:45.955499] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:31.041 20:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.041 20:20:45 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:31.041 20:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.041 20:20:45 -- common/autotest_common.sh@10 -- # set +x 00:13:31.608 20:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.608 20:20:46 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:31.608 20:20:46 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:31.608 20:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.608 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.608 20:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.608 20:20:46 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:31.608 20:20:46 -- lvol/common.sh@26 -- # jq length 00:13:31.608 20:20:46 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:31.608 20:20:46 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:31.608 20:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.608 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.608 20:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.608 20:20:46 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:31.608 20:20:46 -- lvol/common.sh@28 -- # jq length 00:13:31.608 ************************************ 00:13:31.608 END TEST test_create_ublk 00:13:31.608 ************************************ 00:13:31.608 20:20:46 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:31.608 00:13:31.608 real 0m11.215s 00:13:31.608 user 0m0.506s 00:13:31.608 sys 0m1.465s 00:13:31.608 20:20:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.608 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.608 20:20:46 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:31.608 20:20:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:31.608 20:20:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.608 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.608 ************************************ 00:13:31.608 START TEST test_create_multi_ublk 00:13:31.608 ************************************ 00:13:31.608 20:20:46 -- common/autotest_common.sh@1104 -- # test_create_multi_ublk 00:13:31.608 20:20:46 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:31.608 20:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.608 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.608 [2024-10-16 20:20:46.447565] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created 
successfully 00:13:31.608 20:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.608 20:20:46 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:31.608 20:20:46 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:31.608 20:20:46 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:31.608 20:20:46 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:31.608 20:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.608 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.866 20:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.866 20:20:46 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:31.866 20:20:46 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:31.866 20:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.866 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.866 [2024-10-16 20:20:46.662163] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:31.866 [2024-10-16 20:20:46.662477] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:31.866 [2024-10-16 20:20:46.662489] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:31.866 [2024-10-16 20:20:46.662496] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:31.866 [2024-10-16 20:20:46.674077] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:31.866 [2024-10-16 20:20:46.674098] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:31.866 [2024-10-16 20:20:46.686064] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:31.866 [2024-10-16 20:20:46.686562] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:31.866 [2024-10-16 20:20:46.734067] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:31.866 20:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.866 20:20:46 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:31.866 20:20:46 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:31.866 20:20:46 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:31.866 20:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.866 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:13:32.125 20:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.125 20:20:46 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:32.125 20:20:46 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:32.125 20:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.125 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:13:32.125 [2024-10-16 20:20:46.960153] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:32.125 [2024-10-16 20:20:46.960451] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:32.125 [2024-10-16 20:20:46.960464] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:32.125 [2024-10-16 20:20:46.960469] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:32.125 [2024-10-16 20:20:46.968086] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:32.125 [2024-10-16 20:20:46.968102] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:32.125 
[2024-10-16 20:20:46.976076] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:32.125 [2024-10-16 20:20:46.976572] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:32.125 [2024-10-16 20:20:47.000077] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:32.125 20:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.125 20:20:47 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:32.125 20:20:47 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:32.125 20:20:47 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:32.125 20:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.125 20:20:47 -- common/autotest_common.sh@10 -- # set +x 00:13:32.384 20:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.384 20:20:47 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:32.384 20:20:47 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:32.384 20:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.384 20:20:47 -- common/autotest_common.sh@10 -- # set +x 00:13:32.384 [2024-10-16 20:20:47.160160] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:32.384 [2024-10-16 20:20:47.160455] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:32.384 [2024-10-16 20:20:47.160467] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:32.384 [2024-10-16 20:20:47.160474] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:32.384 [2024-10-16 20:20:47.168076] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:32.384 [2024-10-16 20:20:47.168094] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:32.384 [2024-10-16 20:20:47.176065] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:32.384 [2024-10-16 20:20:47.176562] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:32.384 [2024-10-16 20:20:47.185102] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:32.384 20:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.384 20:20:47 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:32.384 20:20:47 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:32.384 20:20:47 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:32.384 20:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.384 20:20:47 -- common/autotest_common.sh@10 -- # set +x 00:13:32.642 20:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.642 20:20:47 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:32.642 20:20:47 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:32.642 20:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.642 20:20:47 -- common/autotest_common.sh@10 -- # set +x 00:13:32.642 [2024-10-16 20:20:47.344155] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:32.642 [2024-10-16 20:20:47.344447] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:32.642 [2024-10-16 20:20:47.344458] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:32.642 [2024-10-16 20:20:47.344464] ublk.c: 433:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:32.642 [2024-10-16 20:20:47.352072] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:32.642 [2024-10-16 20:20:47.352087] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:32.642 [2024-10-16 20:20:47.360077] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:32.642 [2024-10-16 20:20:47.360567] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:32.642 [2024-10-16 20:20:47.369079] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:32.642 20:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.642 20:20:47 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:32.642 20:20:47 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:32.642 20:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.642 20:20:47 -- common/autotest_common.sh@10 -- # set +x 00:13:32.642 20:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.642 20:20:47 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:32.642 { 00:13:32.642 "ublk_device": "/dev/ublkb0", 00:13:32.642 "id": 0, 00:13:32.642 "queue_depth": 512, 00:13:32.642 "num_queues": 4, 00:13:32.642 "bdev_name": "Malloc0" 00:13:32.642 }, 00:13:32.642 { 00:13:32.642 "ublk_device": "/dev/ublkb1", 00:13:32.642 "id": 1, 00:13:32.642 "queue_depth": 512, 00:13:32.642 "num_queues": 4, 00:13:32.642 "bdev_name": "Malloc1" 00:13:32.642 }, 00:13:32.642 { 00:13:32.642 "ublk_device": "/dev/ublkb2", 00:13:32.642 "id": 2, 00:13:32.642 "queue_depth": 512, 00:13:32.642 "num_queues": 4, 00:13:32.642 "bdev_name": "Malloc2" 00:13:32.642 }, 00:13:32.642 { 00:13:32.642 "ublk_device": "/dev/ublkb3", 00:13:32.642 "id": 3, 00:13:32.642 "queue_depth": 512, 00:13:32.642 "num_queues": 4, 00:13:32.642 "bdev_name": "Malloc3" 00:13:32.642 } 00:13:32.642 ]' 00:13:32.642 20:20:47 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:32.642 20:20:47 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:32.642 20:20:47 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:32.642 20:20:47 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:32.642 20:20:47 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:32.642 20:20:47 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:32.642 20:20:47 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:32.642 20:20:47 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:32.642 20:20:47 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:32.642 20:20:47 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:32.642 20:20:47 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:32.642 20:20:47 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:32.642 20:20:47 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:32.642 20:20:47 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:32.900 20:20:47 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:32.900 20:20:47 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:32.900 20:20:47 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:32.900 20:20:47 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:32.900 20:20:47 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:32.900 20:20:47 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:32.900 20:20:47 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:32.900 20:20:47 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:32.900 20:20:47 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:32.900 20:20:47 -- 
ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:32.900 20:20:47 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:32.900 20:20:47 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:32.900 20:20:47 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:32.900 20:20:47 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:32.900 20:20:47 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:32.900 20:20:47 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:32.900 20:20:47 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:33.159 20:20:47 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:33.159 20:20:47 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:33.159 20:20:47 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:33.159 20:20:47 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:33.159 20:20:47 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:33.159 20:20:47 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:33.159 20:20:47 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:33.159 20:20:47 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:33.159 20:20:47 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:33.159 20:20:47 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:33.159 20:20:47 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:33.159 20:20:47 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:33.159 20:20:47 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:33.159 20:20:48 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:33.159 20:20:48 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:33.159 20:20:48 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:33.159 20:20:48 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:33.159 20:20:48 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:33.159 20:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.159 20:20:48 -- common/autotest_common.sh@10 -- # set +x 00:13:33.159 [2024-10-16 20:20:48.024127] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:33.159 [2024-10-16 20:20:48.066633] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:33.159 [2024-10-16 20:20:48.067846] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:33.159 [2024-10-16 20:20:48.072068] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:33.159 [2024-10-16 20:20:48.072330] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:33.159 [2024-10-16 20:20:48.072343] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:33.159 20:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.159 20:20:48 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:33.159 20:20:48 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:33.159 20:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.159 20:20:48 -- common/autotest_common.sh@10 -- # set +x 00:13:33.159 [2024-10-16 20:20:48.087139] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:33.417 [2024-10-16 20:20:48.120592] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:33.417 [2024-10-16 20:20:48.121630] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:33.417 [2024-10-16 20:20:48.128071] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:33.417 [2024-10-16 20:20:48.128306] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:33.417 [2024-10-16 20:20:48.128318] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:33.417 20:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.417 20:20:48 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:33.417 20:20:48 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:33.417 20:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.417 20:20:48 -- common/autotest_common.sh@10 -- # set +x 00:13:33.417 [2024-10-16 20:20:48.144113] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:33.417 [2024-10-16 20:20:48.173580] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:33.417 [2024-10-16 20:20:48.174683] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:33.417 [2024-10-16 20:20:48.180129] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:33.417 [2024-10-16 20:20:48.180357] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:33.417 [2024-10-16 20:20:48.180369] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:33.417 20:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.417 20:20:48 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:33.417 20:20:48 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:33.417 20:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.417 20:20:48 -- common/autotest_common.sh@10 -- # set +x 00:13:33.417 [2024-10-16 20:20:48.193132] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:33.417 [2024-10-16 20:20:48.232096] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:33.417 [2024-10-16 20:20:48.232764] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:33.417 [2024-10-16 20:20:48.240068] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:33.417 [2024-10-16 20:20:48.240287] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:33.417 [2024-10-16 20:20:48.240299] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:33.417 20:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.417 20:20:48 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:33.676 [2024-10-16 20:20:48.424126] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:33.676 [2024-10-16 20:20:48.432464] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:33.676 [2024-10-16 20:20:48.432488] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:33.676 20:20:48 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:33.676 20:20:48 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:33.676 20:20:48 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:33.676 20:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.676 20:20:48 -- common/autotest_common.sh@10 -- # set +x 00:13:33.934 20:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.934 20:20:48 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:33.934 20:20:48 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:33.934 20:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.934 20:20:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:20:49 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.501 20:20:49 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:34.501 20:20:49 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:34.501 20:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.501 20:20:49 -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.501 20:20:49 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:34.501 20:20:49 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:34.501 20:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.501 20:20:49 -- common/autotest_common.sh@10 -- # set +x 00:13:34.759 20:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.759 20:20:49 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:34.759 20:20:49 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:34.759 20:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.759 20:20:49 -- common/autotest_common.sh@10 -- # set +x 00:13:34.759 20:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.759 20:20:49 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:34.759 20:20:49 -- lvol/common.sh@26 -- # jq length 00:13:34.759 20:20:49 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:34.759 20:20:49 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:34.759 20:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.759 20:20:49 -- common/autotest_common.sh@10 -- # set +x 00:13:34.759 20:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.759 20:20:49 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:34.759 20:20:49 -- lvol/common.sh@28 -- # jq length 00:13:34.759 ************************************ 00:13:34.759 END TEST test_create_multi_ublk 00:13:34.759 ************************************ 00:13:34.759 20:20:49 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:34.759 00:13:34.759 real 0m3.184s 00:13:34.759 user 0m0.792s 00:13:34.759 sys 0m0.138s 00:13:34.759 20:20:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.759 20:20:49 -- common/autotest_common.sh@10 -- # set +x 00:13:34.759 20:20:49 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:34.759 20:20:49 -- ublk/ublk.sh@147 -- # cleanup 00:13:34.759 20:20:49 -- ublk/ublk.sh@130 -- # killprocess 69227 00:13:34.759 20:20:49 -- common/autotest_common.sh@926 -- # '[' -z 69227 ']' 00:13:34.759 20:20:49 -- common/autotest_common.sh@930 -- # kill -0 69227 00:13:34.759 20:20:49 -- common/autotest_common.sh@931 -- # uname 00:13:34.759 20:20:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:34.759 20:20:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69227 00:13:34.759 killing process with pid 69227 00:13:34.759 20:20:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:34.759 20:20:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:34.759 20:20:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69227' 00:13:34.759 20:20:49 -- common/autotest_common.sh@945 -- # kill 69227 00:13:34.759 20:20:49 -- common/autotest_common.sh@950 -- # wait 69227 00:13:35.324 [2024-10-16 20:20:50.190871] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:35.324 [2024-10-16 20:20:50.190927] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:36.261 00:13:36.261 real 0m25.388s 00:13:36.261 user 0m35.905s 00:13:36.261 sys 0m10.250s 00:13:36.261 20:20:50 -- common/autotest_common.sh@1105 -- 
# xtrace_disable 00:13:36.261 20:20:50 -- common/autotest_common.sh@10 -- # set +x 00:13:36.261 ************************************ 00:13:36.261 END TEST ublk 00:13:36.261 ************************************ 00:13:36.261 20:20:50 -- spdk/autotest.sh@260 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:36.261 20:20:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:36.261 20:20:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:36.261 20:20:50 -- common/autotest_common.sh@10 -- # set +x 00:13:36.261 ************************************ 00:13:36.261 START TEST ublk_recovery 00:13:36.261 ************************************ 00:13:36.261 20:20:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:36.261 * Looking for test storage... 00:13:36.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:36.261 20:20:50 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:36.261 20:20:50 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:36.261 20:20:50 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:36.261 20:20:50 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:36.261 20:20:50 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:36.261 20:20:50 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:36.261 20:20:50 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:36.261 20:20:50 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:36.261 20:20:50 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:36.261 20:20:50 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:36.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.261 20:20:50 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69618 00:13:36.261 20:20:50 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.261 20:20:50 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69618 00:13:36.261 20:20:50 -- common/autotest_common.sh@819 -- # '[' -z 69618 ']' 00:13:36.261 20:20:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.261 20:20:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:36.261 20:20:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.261 20:20:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:36.261 20:20:50 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:36.261 20:20:50 -- common/autotest_common.sh@10 -- # set +x 00:13:36.261 [2024-10-16 20:20:51.077953] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:36.261 [2024-10-16 20:20:51.078375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69618 ] 00:13:36.522 [2024-10-16 20:20:51.225982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:36.522 [2024-10-16 20:20:51.365503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:36.522 [2024-10-16 20:20:51.365887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.522 [2024-10-16 20:20:51.366011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.113 20:20:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:37.113 20:20:51 -- common/autotest_common.sh@852 -- # return 0 00:13:37.113 20:20:51 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:37.113 20:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.113 20:20:51 -- common/autotest_common.sh@10 -- # set +x 00:13:37.113 [2024-10-16 20:20:51.889520] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:37.113 20:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.113 20:20:51 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:37.113 20:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.113 20:20:51 -- common/autotest_common.sh@10 -- # set +x 00:13:37.113 malloc0 00:13:37.113 20:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.113 20:20:51 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:37.113 20:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.113 20:20:51 -- common/autotest_common.sh@10 -- # set +x 00:13:37.113 [2024-10-16 20:20:51.976166] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:37.113 [2024-10-16 20:20:51.976253] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:37.113 [2024-10-16 20:20:51.976259] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:37.113 [2024-10-16 20:20:51.976266] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:37.113 [2024-10-16 20:20:51.984178] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:37.113 [2024-10-16 20:20:51.984197] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:37.113 [2024-10-16 20:20:51.992071] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:37.114 [2024-10-16 20:20:51.992187] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:37.114 [2024-10-16 20:20:52.009070] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:37.114 1 00:13:37.114 20:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.114 20:20:52 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:38.501 20:20:53 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69653 00:13:38.501 20:20:53 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:38.501 20:20:53 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:38.501 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.501 fio-3.35 00:13:38.501 Starting 1 process 00:13:43.771 20:20:58 -- ublk/ublk_recovery.sh@36 -- # kill -9 69618 00:13:43.771 20:20:58 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:49.110 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69618 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:49.110 20:21:03 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69766 00:13:49.110 20:21:03 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:49.110 20:21:03 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:49.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.110 20:21:03 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69766 00:13:49.110 20:21:03 -- common/autotest_common.sh@819 -- # '[' -z 69766 ']' 00:13:49.110 20:21:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.110 20:21:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:49.110 20:21:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.110 20:21:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:49.110 20:21:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.110 [2024-10-16 20:21:03.123614] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:49.110 [2024-10-16 20:21:03.123764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69766 ] 00:13:49.110 [2024-10-16 20:21:03.287350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:49.110 [2024-10-16 20:21:03.469467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:49.110 [2024-10-16 20:21:03.469871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.110 [2024-10-16 20:21:03.469986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.053 20:21:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:50.053 20:21:04 -- common/autotest_common.sh@852 -- # return 0 00:13:50.053 20:21:04 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:50.053 20:21:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.053 20:21:04 -- common/autotest_common.sh@10 -- # set +x 00:13:50.053 [2024-10-16 20:21:04.628309] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:50.053 20:21:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.053 20:21:04 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:50.053 20:21:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.053 20:21:04 -- common/autotest_common.sh@10 -- # set +x 00:13:50.053 malloc0 00:13:50.053 20:21:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.053 20:21:04 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:50.053 20:21:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.053 20:21:04 -- common/autotest_common.sh@10 -- # set +x 00:13:50.053 [2024-10-16 20:21:04.754272] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:50.053 [2024-10-16 20:21:04.754330] ublk.c: 
933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:50.053 [2024-10-16 20:21:04.754340] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:50.053 [2024-10-16 20:21:04.760081] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:50.053 [2024-10-16 20:21:04.760102] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:50.053 [2024-10-16 20:21:04.760199] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:50.053 1 00:13:50.053 20:21:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.053 20:21:04 -- ublk/ublk_recovery.sh@52 -- # wait 69653 00:14:16.593 [2024-10-16 20:21:28.179064] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:16.593 [2024-10-16 20:21:28.183209] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:16.593 [2024-10-16 20:21:28.189237] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:16.593 [2024-10-16 20:21:28.189259] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:38.520 00:14:38.520 fio_test: (groupid=0, jobs=1): err= 0: pid=69656: Wed Oct 16 20:21:53 2024 00:14:38.520 read: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(3585MiB/60001msec) 00:14:38.520 slat (nsec): min=1105, max=152709, avg=4829.95, stdev=1392.63 00:14:38.520 clat (usec): min=1109, max=30175k, avg=4339.76, stdev=263502.76 00:14:38.520 lat (usec): min=1116, max=30175k, avg=4344.59, stdev=263502.76 00:14:38.520 clat percentiles (usec): 00:14:38.520 | 1.00th=[ 1729], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1876], 00:14:38.520 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1926], 60.00th=[ 1926], 00:14:38.520 | 70.00th=[ 1942], 80.00th=[ 1958], 90.00th=[ 2008], 95.00th=[ 2900], 00:14:38.520 | 99.00th=[ 4948], 99.50th=[ 5538], 99.90th=[ 6259], 99.95th=[ 7767], 00:14:38.520 | 99.99th=[12911] 00:14:38.520 bw ( KiB/s): min= 6064, max=127464, per=100.00%, avg=120411.20, stdev=19372.26, samples=60 00:14:38.520 iops : min= 1516, max=31866, avg=30102.80, stdev=4843.06, samples=60 00:14:38.520 write: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(3579MiB/60001msec); 0 zone resets 00:14:38.520 slat (nsec): min=1134, max=164945, avg=4853.89, stdev=1389.17 00:14:38.520 clat (usec): min=1092, max=30175k, avg=4024.52, stdev=240047.64 00:14:38.520 lat (usec): min=1097, max=30175k, avg=4029.37, stdev=240047.64 00:14:38.520 clat percentiles (usec): 00:14:38.520 | 1.00th=[ 1762], 5.00th=[ 1909], 10.00th=[ 1942], 20.00th=[ 1958], 00:14:38.520 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2008], 60.00th=[ 2024], 00:14:38.520 | 70.00th=[ 2040], 80.00th=[ 2057], 90.00th=[ 2089], 95.00th=[ 2769], 00:14:38.520 | 99.00th=[ 5014], 99.50th=[ 5604], 99.90th=[ 6390], 99.95th=[ 7832], 00:14:38.520 | 99.99th=[12911] 00:14:38.520 bw ( KiB/s): min= 6040, max=126744, per=100.00%, avg=120249.87, stdev=19445.45, samples=60 00:14:38.520 iops : min= 1510, max=31686, avg=30062.47, stdev=4861.36, samples=60 00:14:38.520 lat (msec) : 2=66.35%, 4=31.06%, 10=2.57%, 20=0.01%, >=2000=0.01% 00:14:38.520 cpu : usr=3.41%, sys=14.95%, ctx=60286, majf=0, minf=13 00:14:38.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:38.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:14:38.520 issued rwts: total=917727,916280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.520 00:14:38.520 Run status group 0 (all jobs): 00:14:38.520 READ: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=3585MiB (3759MB), run=60001-60001msec 00:14:38.520 WRITE: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=3579MiB (3753MB), run=60001-60001msec 00:14:38.520 00:14:38.520 Disk stats (read/write): 00:14:38.520 ublkb1: ios=914398/912885, merge=0/0, ticks=3932317/3563339, in_queue=7495656, util=99.89% 00:14:38.520 20:21:53 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:38.520 20:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.520 20:21:53 -- common/autotest_common.sh@10 -- # set +x 00:14:38.520 [2024-10-16 20:21:53.263909] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:38.520 [2024-10-16 20:21:53.312084] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:38.520 [2024-10-16 20:21:53.312241] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:38.520 [2024-10-16 20:21:53.322060] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:38.520 [2024-10-16 20:21:53.322184] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:38.520 [2024-10-16 20:21:53.322194] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:38.520 20:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.520 20:21:53 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:38.520 20:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.520 20:21:53 -- common/autotest_common.sh@10 -- # set +x 00:14:38.520 [2024-10-16 20:21:53.329127] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:38.520 [2024-10-16 20:21:53.333362] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:38.520 [2024-10-16 20:21:53.333390] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:38.520 20:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.520 20:21:53 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:38.520 20:21:53 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:38.520 20:21:53 -- ublk/ublk_recovery.sh@14 -- # killprocess 69766 00:14:38.520 20:21:53 -- common/autotest_common.sh@926 -- # '[' -z 69766 ']' 00:14:38.520 20:21:53 -- common/autotest_common.sh@930 -- # kill -0 69766 00:14:38.520 20:21:53 -- common/autotest_common.sh@931 -- # uname 00:14:38.520 20:21:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:38.520 20:21:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69766 00:14:38.520 20:21:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:38.520 killing process with pid 69766 00:14:38.520 20:21:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:38.520 20:21:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69766' 00:14:38.520 20:21:53 -- common/autotest_common.sh@945 -- # kill 69766 00:14:38.520 20:21:53 -- common/autotest_common.sh@950 -- # wait 69766 00:14:39.455 [2024-10-16 20:21:54.382328] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:39.455 [2024-10-16 20:21:54.382377] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:40.393 00:14:40.393 real 1m4.180s 00:14:40.393 user 1m49.771s 00:14:40.393 sys 
0m18.889s 00:14:40.393 ************************************ 00:14:40.393 END TEST ublk_recovery 00:14:40.393 ************************************ 00:14:40.393 20:21:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.393 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.393 20:21:55 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@268 -- # timing_exit lib 00:14:40.393 20:21:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:40.393 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.393 20:21:55 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:14:40.393 20:21:55 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:40.393 20:21:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:40.393 20:21:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:40.393 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.393 ************************************ 00:14:40.393 START TEST ftl 00:14:40.393 ************************************ 00:14:40.393 20:21:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:40.393 * Looking for test storage... 00:14:40.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:40.393 20:21:55 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:40.393 20:21:55 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:40.393 20:21:55 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:40.393 20:21:55 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:40.393 20:21:55 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
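For reference, the ublk_recovery test that just finished above reduces to the sequence below — a minimal sketch, assuming rpc.py talks to the default /var/tmp/spdk.sock, the kernel ublk driver is loaded, and the <spdk_tgt_pid>/<fio_pid> placeholders are substituted with the PIDs captured from your own run:

    build/bin/spdk_tgt -m 0x3 -L ublk &                   # target on cores 0-1
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128  # exposes /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    kill -9 <spdk_tgt_pid>                                # crash the target mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk &                   # restart it
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1            # reattach to the same /dev/ublkb1
    wait <fio_pid>                                        # fio completes with err=0 above

The point of the test is that the in-flight fio job survives the kill: the kernel holds the outstanding I/O until the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY pair completes on the restarted target, which is exactly the ctrl-cmd sequence visible in the debug log above.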
00:14:40.393 20:21:55 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:40.394 20:21:55 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.394 20:21:55 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:40.394 20:21:55 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:40.394 20:21:55 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.394 20:21:55 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.394 20:21:55 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:40.394 20:21:55 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:40.394 20:21:55 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:40.394 20:21:55 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:40.394 20:21:55 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:40.394 20:21:55 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:40.394 20:21:55 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.394 20:21:55 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.394 20:21:55 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:40.394 20:21:55 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:40.394 20:21:55 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:40.394 20:21:55 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:40.394 20:21:55 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:40.394 20:21:55 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:40.394 20:21:55 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:40.394 20:21:55 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:40.394 20:21:55 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.394 20:21:55 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.394 20:21:55 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.394 20:21:55 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:40.394 20:21:55 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:40.394 20:21:55 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:40.394 20:21:55 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:40.394 20:21:55 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:40.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:40.963 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:40.963 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:40.963 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:40.963 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:40.963 20:21:55 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70562 00:14:40.963 20:21:55 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:40.963 20:21:55 -- ftl/ftl.sh@38 -- # waitforlisten 70562 00:14:40.963 20:21:55 -- common/autotest_common.sh@819 -- # '[' -z 70562 ']' 00:14:40.963 20:21:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.963 20:21:55 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:14:40.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.963 20:21:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.963 20:21:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:40.963 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.963 [2024-10-16 20:21:55.886957] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:40.963 [2024-10-16 20:21:55.887125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70562 ] 00:14:41.224 [2024-10-16 20:21:56.039907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.486 [2024-10-16 20:21:56.273206] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:41.486 [2024-10-16 20:21:56.273439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.058 20:21:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.058 20:21:56 -- common/autotest_common.sh@852 -- # return 0 00:14:42.058 20:21:56 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:42.058 20:21:56 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:43.002 20:21:57 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:43.002 20:21:57 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:43.574 20:21:58 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:43.574 20:21:58 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:43.574 20:21:58 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:43.574 20:21:58 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:43.574 20:21:58 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:43.574 20:21:58 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:43.574 20:21:58 -- ftl/ftl.sh@50 -- # break 00:14:43.574 20:21:58 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:43.574 20:21:58 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:43.574 20:21:58 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:43.575 20:21:58 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:43.853 20:21:58 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:43.853 20:21:58 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:43.853 20:21:58 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:43.853 20:21:58 -- ftl/ftl.sh@63 -- # break 00:14:43.853 20:21:58 -- ftl/ftl.sh@66 -- # killprocess 70562 00:14:43.853 20:21:58 -- common/autotest_common.sh@926 -- # '[' -z 70562 ']' 00:14:43.853 20:21:58 -- common/autotest_common.sh@930 -- # kill -0 70562 00:14:43.853 20:21:58 -- common/autotest_common.sh@931 -- # uname 00:14:43.853 20:21:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:43.853 20:21:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70562 00:14:43.853 killing process with pid 70562 00:14:43.853 20:21:58 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:43.853 20:21:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:43.853 20:21:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70562' 00:14:43.853 20:21:58 -- common/autotest_common.sh@945 -- # kill 70562 00:14:43.853 20:21:58 -- common/autotest_common.sh@950 -- # wait 70562 00:14:44.847 20:21:59 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:44.847 20:21:59 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:44.847 20:21:59 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:44.847 20:21:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:44.847 20:21:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.847 20:21:59 -- common/autotest_common.sh@10 -- # set +x 00:14:45.109 ************************************ 00:14:45.109 START TEST ftl_fio_basic 00:14:45.109 ************************************ 00:14:45.109 20:21:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:45.109 * Looking for test storage... 00:14:45.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:45.109 20:21:59 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:45.109 20:21:59 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:45.109 20:21:59 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:45.109 20:21:59 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:45.109 20:21:59 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:45.109 20:21:59 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:45.109 20:21:59 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.109 20:21:59 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:45.109 20:21:59 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:45.109 20:21:59 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:45.109 20:21:59 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:45.109 20:21:59 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:45.109 20:21:59 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:45.109 20:21:59 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:45.109 20:21:59 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:45.109 20:21:59 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:45.109 20:21:59 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:45.109 20:21:59 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:45.109 20:21:59 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:45.109 20:21:59 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:45.109 20:21:59 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:45.109 20:21:59 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:45.109 20:21:59 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:45.109 20:21:59 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:45.109 20:21:59 -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:45.109 20:21:59 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:45.109 20:21:59 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:45.109 20:21:59 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.109 20:21:59 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.109 20:21:59 -- ftl/fio.sh@11 -- # declare -A suite 00:14:45.109 20:21:59 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:45.109 20:21:59 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:45.109 20:21:59 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:45.109 20:21:59 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.109 20:21:59 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:45.109 20:21:59 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:45.109 20:21:59 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:45.109 20:21:59 -- ftl/fio.sh@26 -- # uuid= 00:14:45.109 20:21:59 -- ftl/fio.sh@27 -- # timeout=240 00:14:45.109 20:21:59 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:45.109 20:21:59 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:45.109 20:21:59 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:45.109 20:21:59 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:45.109 20:21:59 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:45.109 20:21:59 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:45.109 20:21:59 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:45.109 20:21:59 -- ftl/fio.sh@45 -- # svcpid=70685 00:14:45.109 20:21:59 -- ftl/fio.sh@46 -- # waitforlisten 70685 00:14:45.109 20:21:59 -- common/autotest_common.sh@819 -- # '[' -z 70685 ']' 00:14:45.109 20:21:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.109 20:21:59 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:45.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.109 20:21:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:45.109 20:21:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.109 20:21:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:45.109 20:21:59 -- common/autotest_common.sh@10 -- # set +x 00:14:45.109 [2024-10-16 20:21:59.949411] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
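The ftl_fio_basic prologue that follows assembles the FTL bdev stack one RPC at a time: attach the base NVMe controller, carve a thin-provisioned logical volume out of it for data, attach the second controller, and split a slice off it for the non-volatile write cache. Condensed — with <lvs_uuid>/<lvol_uuid> standing in for the UUIDs each call returns, and with the PCI addresses ftl.sh selected via its jq filters earlier — it is roughly:

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid>   # 103424 MiB thin lvol
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1                   # 5171 MiB cache slice
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol_uuid> -c nvc0n1p0 --l2p_dram_limit 60

The -t 240 timeout on the last call gives first-time FTL creation room for the NV cache scrub, which shows up further down in the startup trace.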
00:14:45.109 [2024-10-16 20:21:59.949683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70685 ] 00:14:45.371 [2024-10-16 20:22:00.098216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:45.371 [2024-10-16 20:22:00.284867] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:45.371 [2024-10-16 20:22:00.285370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.371 [2024-10-16 20:22:00.285719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.371 [2024-10-16 20:22:00.285824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.759 20:22:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:46.759 20:22:01 -- common/autotest_common.sh@852 -- # return 0 00:14:46.759 20:22:01 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:46.759 20:22:01 -- ftl/common.sh@54 -- # local name=nvme0 00:14:46.759 20:22:01 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:46.759 20:22:01 -- ftl/common.sh@56 -- # local size=103424 00:14:46.759 20:22:01 -- ftl/common.sh@59 -- # local base_bdev 00:14:46.759 20:22:01 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:47.020 20:22:01 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:47.020 20:22:01 -- ftl/common.sh@62 -- # local base_size 00:14:47.020 20:22:01 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:47.020 20:22:01 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:14:47.020 20:22:01 -- common/autotest_common.sh@1358 -- # local bdev_info 00:14:47.020 20:22:01 -- common/autotest_common.sh@1359 -- # local bs 00:14:47.020 20:22:01 -- common/autotest_common.sh@1360 -- # local nb 00:14:47.020 20:22:01 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:47.020 20:22:01 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:14:47.020 { 00:14:47.020 "name": "nvme0n1", 00:14:47.020 "aliases": [ 00:14:47.020 "acfde5ce-85a6-4c26-84b1-6ff40a37d4fd" 00:14:47.020 ], 00:14:47.020 "product_name": "NVMe disk", 00:14:47.020 "block_size": 4096, 00:14:47.020 "num_blocks": 1310720, 00:14:47.020 "uuid": "acfde5ce-85a6-4c26-84b1-6ff40a37d4fd", 00:14:47.020 "assigned_rate_limits": { 00:14:47.020 "rw_ios_per_sec": 0, 00:14:47.020 "rw_mbytes_per_sec": 0, 00:14:47.020 "r_mbytes_per_sec": 0, 00:14:47.020 "w_mbytes_per_sec": 0 00:14:47.020 }, 00:14:47.020 "claimed": false, 00:14:47.020 "zoned": false, 00:14:47.020 "supported_io_types": { 00:14:47.020 "read": true, 00:14:47.020 "write": true, 00:14:47.020 "unmap": true, 00:14:47.020 "write_zeroes": true, 00:14:47.020 "flush": true, 00:14:47.020 "reset": true, 00:14:47.020 "compare": true, 00:14:47.020 "compare_and_write": false, 00:14:47.020 "abort": true, 00:14:47.020 "nvme_admin": true, 00:14:47.020 "nvme_io": true 00:14:47.020 }, 00:14:47.020 "driver_specific": { 00:14:47.020 "nvme": [ 00:14:47.020 { 00:14:47.020 "pci_address": "0000:00:07.0", 00:14:47.020 "trid": { 00:14:47.020 "trtype": "PCIe", 00:14:47.020 "traddr": "0000:00:07.0" 00:14:47.020 }, 00:14:47.020 "ctrlr_data": { 00:14:47.020 "cntlid": 0, 00:14:47.020 "vendor_id": "0x1b36", 00:14:47.020 "model_number": "QEMU NVMe Ctrl", 00:14:47.020 "serial_number": 
"12341", 00:14:47.020 "firmware_revision": "8.0.0", 00:14:47.020 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:47.020 "oacs": { 00:14:47.020 "security": 0, 00:14:47.020 "format": 1, 00:14:47.020 "firmware": 0, 00:14:47.020 "ns_manage": 1 00:14:47.020 }, 00:14:47.020 "multi_ctrlr": false, 00:14:47.020 "ana_reporting": false 00:14:47.020 }, 00:14:47.020 "vs": { 00:14:47.020 "nvme_version": "1.4" 00:14:47.020 }, 00:14:47.020 "ns_data": { 00:14:47.020 "id": 1, 00:14:47.020 "can_share": false 00:14:47.020 } 00:14:47.020 } 00:14:47.020 ], 00:14:47.020 "mp_policy": "active_passive" 00:14:47.020 } 00:14:47.020 } 00:14:47.020 ]' 00:14:47.020 20:22:01 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:14:47.281 20:22:01 -- common/autotest_common.sh@1362 -- # bs=4096 00:14:47.281 20:22:01 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:14:47.281 20:22:01 -- common/autotest_common.sh@1363 -- # nb=1310720 00:14:47.281 20:22:01 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:14:47.281 20:22:01 -- common/autotest_common.sh@1367 -- # echo 5120 00:14:47.281 20:22:01 -- ftl/common.sh@63 -- # base_size=5120 00:14:47.281 20:22:01 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:47.281 20:22:01 -- ftl/common.sh@67 -- # clear_lvols 00:14:47.281 20:22:01 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:47.281 20:22:01 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:47.281 20:22:02 -- ftl/common.sh@28 -- # stores= 00:14:47.281 20:22:02 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:47.542 20:22:02 -- ftl/common.sh@68 -- # lvs=f56c36c5-baf0-4a83-8784-3e36111b3e4c 00:14:47.542 20:22:02 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f56c36c5-baf0-4a83-8784-3e36111b3e4c 00:14:47.802 20:22:02 -- ftl/fio.sh@48 -- # split_bdev=d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:47.802 20:22:02 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:47.802 20:22:02 -- ftl/common.sh@35 -- # local name=nvc0 00:14:47.802 20:22:02 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:47.802 20:22:02 -- ftl/common.sh@37 -- # local base_bdev=d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:47.802 20:22:02 -- ftl/common.sh@38 -- # local cache_size= 00:14:47.802 20:22:02 -- ftl/common.sh@41 -- # get_bdev_size d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:47.802 20:22:02 -- common/autotest_common.sh@1357 -- # local bdev_name=d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:47.802 20:22:02 -- common/autotest_common.sh@1358 -- # local bdev_info 00:14:47.802 20:22:02 -- common/autotest_common.sh@1359 -- # local bs 00:14:47.802 20:22:02 -- common/autotest_common.sh@1360 -- # local nb 00:14:47.802 20:22:02 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:48.063 20:22:02 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:14:48.063 { 00:14:48.063 "name": "d5a0332f-2bfb-4da5-94f0-63a50291e7ff", 00:14:48.063 "aliases": [ 00:14:48.063 "lvs/nvme0n1p0" 00:14:48.063 ], 00:14:48.063 "product_name": "Logical Volume", 00:14:48.063 "block_size": 4096, 00:14:48.063 "num_blocks": 26476544, 00:14:48.063 "uuid": "d5a0332f-2bfb-4da5-94f0-63a50291e7ff", 00:14:48.063 "assigned_rate_limits": { 00:14:48.063 "rw_ios_per_sec": 0, 00:14:48.063 "rw_mbytes_per_sec": 0, 00:14:48.063 "r_mbytes_per_sec": 0, 00:14:48.063 
"w_mbytes_per_sec": 0 00:14:48.063 }, 00:14:48.063 "claimed": false, 00:14:48.063 "zoned": false, 00:14:48.063 "supported_io_types": { 00:14:48.063 "read": true, 00:14:48.063 "write": true, 00:14:48.063 "unmap": true, 00:14:48.063 "write_zeroes": true, 00:14:48.063 "flush": false, 00:14:48.063 "reset": true, 00:14:48.063 "compare": false, 00:14:48.063 "compare_and_write": false, 00:14:48.063 "abort": false, 00:14:48.063 "nvme_admin": false, 00:14:48.063 "nvme_io": false 00:14:48.063 }, 00:14:48.063 "driver_specific": { 00:14:48.063 "lvol": { 00:14:48.063 "lvol_store_uuid": "f56c36c5-baf0-4a83-8784-3e36111b3e4c", 00:14:48.063 "base_bdev": "nvme0n1", 00:14:48.063 "thin_provision": true, 00:14:48.063 "snapshot": false, 00:14:48.063 "clone": false, 00:14:48.063 "esnap_clone": false 00:14:48.063 } 00:14:48.063 } 00:14:48.063 } 00:14:48.063 ]' 00:14:48.063 20:22:02 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:14:48.063 20:22:02 -- common/autotest_common.sh@1362 -- # bs=4096 00:14:48.063 20:22:02 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:14:48.063 20:22:02 -- common/autotest_common.sh@1363 -- # nb=26476544 00:14:48.063 20:22:02 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:14:48.063 20:22:02 -- common/autotest_common.sh@1367 -- # echo 103424 00:14:48.063 20:22:02 -- ftl/common.sh@41 -- # local base_size=5171 00:14:48.063 20:22:02 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:48.064 20:22:02 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:48.324 20:22:03 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:48.324 20:22:03 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:48.324 20:22:03 -- ftl/common.sh@48 -- # get_bdev_size d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:48.324 20:22:03 -- common/autotest_common.sh@1357 -- # local bdev_name=d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:48.324 20:22:03 -- common/autotest_common.sh@1358 -- # local bdev_info 00:14:48.324 20:22:03 -- common/autotest_common.sh@1359 -- # local bs 00:14:48.324 20:22:03 -- common/autotest_common.sh@1360 -- # local nb 00:14:48.324 20:22:03 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:48.585 20:22:03 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:14:48.585 { 00:14:48.585 "name": "d5a0332f-2bfb-4da5-94f0-63a50291e7ff", 00:14:48.585 "aliases": [ 00:14:48.585 "lvs/nvme0n1p0" 00:14:48.585 ], 00:14:48.585 "product_name": "Logical Volume", 00:14:48.585 "block_size": 4096, 00:14:48.585 "num_blocks": 26476544, 00:14:48.585 "uuid": "d5a0332f-2bfb-4da5-94f0-63a50291e7ff", 00:14:48.585 "assigned_rate_limits": { 00:14:48.585 "rw_ios_per_sec": 0, 00:14:48.585 "rw_mbytes_per_sec": 0, 00:14:48.585 "r_mbytes_per_sec": 0, 00:14:48.585 "w_mbytes_per_sec": 0 00:14:48.585 }, 00:14:48.585 "claimed": false, 00:14:48.585 "zoned": false, 00:14:48.585 "supported_io_types": { 00:14:48.585 "read": true, 00:14:48.585 "write": true, 00:14:48.585 "unmap": true, 00:14:48.585 "write_zeroes": true, 00:14:48.585 "flush": false, 00:14:48.585 "reset": true, 00:14:48.585 "compare": false, 00:14:48.586 "compare_and_write": false, 00:14:48.586 "abort": false, 00:14:48.586 "nvme_admin": false, 00:14:48.586 "nvme_io": false 00:14:48.586 }, 00:14:48.586 "driver_specific": { 00:14:48.586 "lvol": { 00:14:48.586 "lvol_store_uuid": "f56c36c5-baf0-4a83-8784-3e36111b3e4c", 00:14:48.586 "base_bdev": "nvme0n1", 00:14:48.586 "thin_provision": true, 
00:14:48.586 "snapshot": false, 00:14:48.586 "clone": false, 00:14:48.586 "esnap_clone": false 00:14:48.586 } 00:14:48.586 } 00:14:48.586 } 00:14:48.586 ]' 00:14:48.586 20:22:03 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:14:48.586 20:22:03 -- common/autotest_common.sh@1362 -- # bs=4096 00:14:48.586 20:22:03 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:14:48.586 20:22:03 -- common/autotest_common.sh@1363 -- # nb=26476544 00:14:48.586 20:22:03 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:14:48.586 20:22:03 -- common/autotest_common.sh@1367 -- # echo 103424 00:14:48.586 20:22:03 -- ftl/common.sh@48 -- # cache_size=5171 00:14:48.586 20:22:03 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:48.586 20:22:03 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:48.586 20:22:03 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:48.586 20:22:03 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:48.586 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:48.586 20:22:03 -- ftl/fio.sh@56 -- # get_bdev_size d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:48.586 20:22:03 -- common/autotest_common.sh@1357 -- # local bdev_name=d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:48.586 20:22:03 -- common/autotest_common.sh@1358 -- # local bdev_info 00:14:48.586 20:22:03 -- common/autotest_common.sh@1359 -- # local bs 00:14:48.586 20:22:03 -- common/autotest_common.sh@1360 -- # local nb 00:14:48.586 20:22:03 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d5a0332f-2bfb-4da5-94f0-63a50291e7ff 00:14:48.846 20:22:03 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:14:48.846 { 00:14:48.846 "name": "d5a0332f-2bfb-4da5-94f0-63a50291e7ff", 00:14:48.846 "aliases": [ 00:14:48.846 "lvs/nvme0n1p0" 00:14:48.846 ], 00:14:48.846 "product_name": "Logical Volume", 00:14:48.846 "block_size": 4096, 00:14:48.846 "num_blocks": 26476544, 00:14:48.846 "uuid": "d5a0332f-2bfb-4da5-94f0-63a50291e7ff", 00:14:48.846 "assigned_rate_limits": { 00:14:48.846 "rw_ios_per_sec": 0, 00:14:48.846 "rw_mbytes_per_sec": 0, 00:14:48.846 "r_mbytes_per_sec": 0, 00:14:48.846 "w_mbytes_per_sec": 0 00:14:48.846 }, 00:14:48.846 "claimed": false, 00:14:48.846 "zoned": false, 00:14:48.846 "supported_io_types": { 00:14:48.846 "read": true, 00:14:48.846 "write": true, 00:14:48.846 "unmap": true, 00:14:48.846 "write_zeroes": true, 00:14:48.846 "flush": false, 00:14:48.846 "reset": true, 00:14:48.846 "compare": false, 00:14:48.846 "compare_and_write": false, 00:14:48.846 "abort": false, 00:14:48.846 "nvme_admin": false, 00:14:48.846 "nvme_io": false 00:14:48.846 }, 00:14:48.846 "driver_specific": { 00:14:48.846 "lvol": { 00:14:48.846 "lvol_store_uuid": "f56c36c5-baf0-4a83-8784-3e36111b3e4c", 00:14:48.846 "base_bdev": "nvme0n1", 00:14:48.846 "thin_provision": true, 00:14:48.846 "snapshot": false, 00:14:48.846 "clone": false, 00:14:48.846 "esnap_clone": false 00:14:48.846 } 00:14:48.846 } 00:14:48.846 } 00:14:48.846 ]' 00:14:48.846 20:22:03 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:14:48.846 20:22:03 -- common/autotest_common.sh@1362 -- # bs=4096 00:14:48.846 20:22:03 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:14:48.846 20:22:03 -- common/autotest_common.sh@1363 -- # nb=26476544 00:14:48.846 20:22:03 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:14:48.846 20:22:03 -- common/autotest_common.sh@1367 -- # echo 103424 00:14:48.846 
20:22:03 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:48.846 20:22:03 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:48.846 20:22:03 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d5a0332f-2bfb-4da5-94f0-63a50291e7ff -c nvc0n1p0 --l2p_dram_limit 60 00:14:49.108 [2024-10-16 20:22:03.931810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.931861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:49.108 [2024-10-16 20:22:03.931878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:14:49.108 [2024-10-16 20:22:03.931887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.931959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.931969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:49.108 [2024-10-16 20:22:03.931979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:14:49.108 [2024-10-16 20:22:03.931987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.932018] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:49.108 [2024-10-16 20:22:03.932751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:49.108 [2024-10-16 20:22:03.932773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.932781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:49.108 [2024-10-16 20:22:03.932792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:14:49.108 [2024-10-16 20:22:03.932800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.932869] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 62700e03-ed59-46f2-a9bc-82be14a078b3 00:14:49.108 [2024-10-16 20:22:03.934278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.934316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:49.108 [2024-10-16 20:22:03.934327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:14:49.108 [2024-10-16 20:22:03.934338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.941452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.941627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:49.108 [2024-10-16 20:22:03.941642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.039 ms 00:14:49.108 [2024-10-16 20:22:03.941653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.941741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.941753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:49.108 [2024-10-16 20:22:03.941761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:14:49.108 [2024-10-16 20:22:03.941773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.941822] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.941833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:49.108 [2024-10-16 20:22:03.941841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:49.108 [2024-10-16 20:22:03.941852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.941882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:49.108 [2024-10-16 20:22:03.945886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.945916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:49.108 [2024-10-16 20:22:03.945927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.009 ms 00:14:49.108 [2024-10-16 20:22:03.945935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.945982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.945991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:49.108 [2024-10-16 20:22:03.946001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:14:49.108 [2024-10-16 20:22:03.946008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.946080] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:49.108 [2024-10-16 20:22:03.946200] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:49.108 [2024-10-16 20:22:03.946217] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:49.108 [2024-10-16 20:22:03.946228] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:49.108 [2024-10-16 20:22:03.946240] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:49.108 [2024-10-16 20:22:03.946250] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:49.108 [2024-10-16 20:22:03.946260] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:49.108 [2024-10-16 20:22:03.946268] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:49.108 [2024-10-16 20:22:03.946281] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:49.108 [2024-10-16 20:22:03.946289] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:49.108 [2024-10-16 20:22:03.946298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.946306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:49.108 [2024-10-16 20:22:03.946316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:14:49.108 [2024-10-16 20:22:03.946324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.946396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.108 [2024-10-16 20:22:03.946405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:49.108 [2024-10-16 20:22:03.946414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.046 ms 00:14:49.108 [2024-10-16 20:22:03.946422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.108 [2024-10-16 20:22:03.946519] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:49.108 [2024-10-16 20:22:03.946529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:49.109 [2024-10-16 20:22:03.946539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:49.109 [2024-10-16 20:22:03.946547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946558] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:49.109 [2024-10-16 20:22:03.946565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:49.109 [2024-10-16 20:22:03.946581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:49.109 [2024-10-16 20:22:03.946591] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:49.109 [2024-10-16 20:22:03.946607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:49.109 [2024-10-16 20:22:03.946614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:49.109 [2024-10-16 20:22:03.946624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:49.109 [2024-10-16 20:22:03.946631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:49.109 [2024-10-16 20:22:03.946640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:49.109 [2024-10-16 20:22:03.946647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:49.109 [2024-10-16 20:22:03.946664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:49.109 [2024-10-16 20:22:03.946672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:49.109 [2024-10-16 20:22:03.946688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:49.109 [2024-10-16 20:22:03.946695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:49.109 [2024-10-16 20:22:03.946703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:49.109 [2024-10-16 20:22:03.946711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:49.109 [2024-10-16 20:22:03.946730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:49.109 [2024-10-16 20:22:03.946738] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:49.109 [2024-10-16 20:22:03.946755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:49.109 [2024-10-16 20:22:03.946762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946770] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:49.109 [2024-10-16 20:22:03.946776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:49.109 [2024-10-16 20:22:03.946787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:49.109 [2024-10-16 20:22:03.946817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:49.109 [2024-10-16 20:22:03.946824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:49.109 [2024-10-16 20:22:03.946840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:49.109 [2024-10-16 20:22:03.946849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:49.109 [2024-10-16 20:22:03.946855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:49.109 [2024-10-16 20:22:03.946863] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:49.109 [2024-10-16 20:22:03.946870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:49.109 [2024-10-16 20:22:03.946879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:49.109 [2024-10-16 20:22:03.946888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:49.109 [2024-10-16 20:22:03.946897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:49.109 [2024-10-16 20:22:03.946905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:49.109 [2024-10-16 20:22:03.946913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:49.109 [2024-10-16 20:22:03.946921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:49.109 [2024-10-16 20:22:03.946932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:49.109 [2024-10-16 20:22:03.946938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:49.109 [2024-10-16 20:22:03.946949] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:49.109 [2024-10-16 20:22:03.946959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:49.109 [2024-10-16 20:22:03.946973] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:49.109 [2024-10-16 20:22:03.946980] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:49.109 [2024-10-16 20:22:03.946990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:49.109 [2024-10-16 20:22:03.946997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:49.109 [2024-10-16 20:22:03.947008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:49.109 [2024-10-16 20:22:03.947015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:49.109 
[2024-10-16 20:22:03.947024] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:49.109 [2024-10-16 20:22:03.947031] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:49.109 [2024-10-16 20:22:03.947062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:49.109 [2024-10-16 20:22:03.947070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:49.109 [2024-10-16 20:22:03.947080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:49.109 [2024-10-16 20:22:03.947088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:49.109 [2024-10-16 20:22:03.947101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:49.109 [2024-10-16 20:22:03.947107] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:49.109 [2024-10-16 20:22:03.947118] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:49.109 [2024-10-16 20:22:03.947129] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:49.109 [2024-10-16 20:22:03.947139] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:49.109 [2024-10-16 20:22:03.947147] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:49.109 [2024-10-16 20:22:03.947156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:49.109 [2024-10-16 20:22:03.947165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.109 [2024-10-16 20:22:03.947174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:49.109 [2024-10-16 20:22:03.947182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:14:49.109 [2024-10-16 20:22:03.947191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.109 [2024-10-16 20:22:03.963763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.109 [2024-10-16 20:22:03.963805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:49.109 [2024-10-16 20:22:03.963817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.505 ms 00:14:49.109 [2024-10-16 20:22:03.963827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.109 [2024-10-16 20:22:03.963920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.109 [2024-10-16 20:22:03.963933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:49.109 [2024-10-16 20:22:03.963943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:14:49.109 [2024-10-16 20:22:03.963953] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.109 [2024-10-16 20:22:03.998689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.109 [2024-10-16 20:22:03.998724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:49.109 [2024-10-16 20:22:03.998733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.684 ms 00:14:49.109 [2024-10-16 20:22:03.998743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.109 [2024-10-16 20:22:03.998781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.109 [2024-10-16 20:22:03.998791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:49.109 [2024-10-16 20:22:03.998799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:49.109 [2024-10-16 20:22:03.998808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.109 [2024-10-16 20:22:03.999267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.109 [2024-10-16 20:22:03.999294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:49.109 [2024-10-16 20:22:03.999303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:14:49.109 [2024-10-16 20:22:03.999313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.109 [2024-10-16 20:22:03.999443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.109 [2024-10-16 20:22:03.999457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:49.110 [2024-10-16 20:22:03.999465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:14:49.110 [2024-10-16 20:22:03.999474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.110 [2024-10-16 20:22:04.027404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.110 [2024-10-16 20:22:04.027458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:49.110 [2024-10-16 20:22:04.027477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.897 ms 00:14:49.110 [2024-10-16 20:22:04.027493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.370 [2024-10-16 20:22:04.041487] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:49.370 [2024-10-16 20:22:04.058821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.370 [2024-10-16 20:22:04.058857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:49.370 [2024-10-16 20:22:04.058871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.172 ms 00:14:49.370 [2024-10-16 20:22:04.058879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.370 [2024-10-16 20:22:04.121736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.370 [2024-10-16 20:22:04.121782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:49.370 [2024-10-16 20:22:04.121796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.814 ms 00:14:49.370 [2024-10-16 20:22:04.121805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.370 [2024-10-16 20:22:04.121861] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
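A quick sanity check on the layout dump above: the L2P region size follows directly from the reported entry count and address size, and the --l2p_dram_limit 60 passed to bdev_ftl_create is what forces only part of the table to stay resident (the "59 (of 60) MiB" notice just below). A back-of-the-envelope using only numbers printed in this log:

    # 20971520 L2P entries x 4-byte addresses = 80 MiB table
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80

With an 80 MiB table and a 60 MiB DRAM budget, the L2P cache keeps at most 59 MiB of it resident — the remaining megabyte presumably going to the cache's own bookkeeping — and pages the rest in on demand.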
00:14:49.370 [2024-10-16 20:22:04.121873] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:52.670 [2024-10-16 20:22:07.509146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.670 [2024-10-16 20:22:07.509213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:52.670 [2024-10-16 20:22:07.509232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3387.274 ms 00:14:52.670 [2024-10-16 20:22:07.509241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.670 [2024-10-16 20:22:07.509450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.670 [2024-10-16 20:22:07.509463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:52.670 [2024-10-16 20:22:07.509476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:14:52.670 [2024-10-16 20:22:07.509484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.670 [2024-10-16 20:22:07.533401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.670 [2024-10-16 20:22:07.533435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:52.670 [2024-10-16 20:22:07.533450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.863 ms 00:14:52.670 [2024-10-16 20:22:07.533458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.670 [2024-10-16 20:22:07.555996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.670 [2024-10-16 20:22:07.556034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:52.670 [2024-10-16 20:22:07.556063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.498 ms 00:14:52.670 [2024-10-16 20:22:07.556071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.670 [2024-10-16 20:22:07.556405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.670 [2024-10-16 20:22:07.556428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:52.670 [2024-10-16 20:22:07.556440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:14:52.670 [2024-10-16 20:22:07.556448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.930 [2024-10-16 20:22:07.618015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.930 [2024-10-16 20:22:07.618061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:52.930 [2024-10-16 20:22:07.618075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.511 ms 00:14:52.930 [2024-10-16 20:22:07.618082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.930 [2024-10-16 20:22:07.642609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.930 [2024-10-16 20:22:07.642638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:52.930 [2024-10-16 20:22:07.642654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.482 ms 00:14:52.930 [2024-10-16 20:22:07.642662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.930 [2024-10-16 20:22:07.647114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.930 [2024-10-16 20:22:07.647147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:14:52.930 [2024-10-16 20:22:07.647160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.403 ms 00:14:52.931 [2024-10-16 20:22:07.647168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.931 [2024-10-16 20:22:07.670629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.931 [2024-10-16 20:22:07.670795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:52.931 [2024-10-16 20:22:07.670815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.416 ms 00:14:52.931 [2024-10-16 20:22:07.670823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.931 [2024-10-16 20:22:07.670881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.931 [2024-10-16 20:22:07.670890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:52.931 [2024-10-16 20:22:07.670901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:14:52.931 [2024-10-16 20:22:07.670908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.931 [2024-10-16 20:22:07.671005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.931 [2024-10-16 20:22:07.671015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:52.931 [2024-10-16 20:22:07.671027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:14:52.931 [2024-10-16 20:22:07.671034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.931 [2024-10-16 20:22:07.672023] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3739.753 ms, result 0 00:14:52.931 { 00:14:52.931 "name": "ftl0", 00:14:52.931 "uuid": "62700e03-ed59-46f2-a9bc-82be14a078b3" 00:14:52.931 } 00:14:52.931 20:22:07 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:52.931 20:22:07 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:14:52.931 20:22:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:52.931 20:22:07 -- common/autotest_common.sh@889 -- # local i 00:14:52.931 20:22:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:52.931 20:22:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:52.931 20:22:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:53.191 20:22:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:53.191 [ 00:14:53.191 { 00:14:53.191 "name": "ftl0", 00:14:53.191 "aliases": [ 00:14:53.191 "62700e03-ed59-46f2-a9bc-82be14a078b3" 00:14:53.191 ], 00:14:53.191 "product_name": "FTL disk", 00:14:53.191 "block_size": 4096, 00:14:53.191 "num_blocks": 20971520, 00:14:53.191 "uuid": "62700e03-ed59-46f2-a9bc-82be14a078b3", 00:14:53.191 "assigned_rate_limits": { 00:14:53.191 "rw_ios_per_sec": 0, 00:14:53.191 "rw_mbytes_per_sec": 0, 00:14:53.191 "r_mbytes_per_sec": 0, 00:14:53.191 "w_mbytes_per_sec": 0 00:14:53.191 }, 00:14:53.191 "claimed": false, 00:14:53.191 "zoned": false, 00:14:53.191 "supported_io_types": { 00:14:53.191 "read": true, 00:14:53.191 "write": true, 00:14:53.191 "unmap": true, 00:14:53.191 "write_zeroes": true, 00:14:53.191 "flush": true, 00:14:53.191 "reset": false, 00:14:53.191 "compare": false, 00:14:53.191 "compare_and_write": false, 00:14:53.191 "abort": false, 00:14:53.191 "nvme_admin": false, 00:14:53.191 "nvme_io": false 00:14:53.191 }, 
00:14:53.191 "driver_specific": { 00:14:53.191 "ftl": { 00:14:53.191 "base_bdev": "d5a0332f-2bfb-4da5-94f0-63a50291e7ff", 00:14:53.191 "cache": "nvc0n1p0" 00:14:53.191 } 00:14:53.191 } 00:14:53.191 } 00:14:53.191 ] 00:14:53.191 20:22:08 -- common/autotest_common.sh@895 -- # return 0 00:14:53.191 20:22:08 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:53.191 20:22:08 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:53.449 20:22:08 -- ftl/fio.sh@70 -- # echo ']}' 00:14:53.449 20:22:08 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:53.708 [2024-10-16 20:22:08.412650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.412684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:53.708 [2024-10-16 20:22:08.412692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:53.708 [2024-10-16 20:22:08.412701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.412726] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:53.708 [2024-10-16 20:22:08.414967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.414990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:53.708 [2024-10-16 20:22:08.415002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:14:53.708 [2024-10-16 20:22:08.415008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.415411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.415429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:53.708 [2024-10-16 20:22:08.415438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:14:53.708 [2024-10-16 20:22:08.415444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.417904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.418023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:53.708 [2024-10-16 20:22:08.418038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.435 ms 00:14:53.708 [2024-10-16 20:22:08.418058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.422754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.422777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:53.708 [2024-10-16 20:22:08.422786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.660 ms 00:14:53.708 [2024-10-16 20:22:08.422793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.440318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.440418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:53.708 [2024-10-16 20:22:08.440433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.439 ms 00:14:53.708 [2024-10-16 20:22:08.440439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.452566] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.452594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:53.708 [2024-10-16 20:22:08.452617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.093 ms 00:14:53.708 [2024-10-16 20:22:08.452625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.452777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.452791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:53.708 [2024-10-16 20:22:08.452803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:14:53.708 [2024-10-16 20:22:08.452809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.471024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.471059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:53.708 [2024-10-16 20:22:08.471069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.187 ms 00:14:53.708 [2024-10-16 20:22:08.471076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.488637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.488667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:53.708 [2024-10-16 20:22:08.488678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.524 ms 00:14:53.708 [2024-10-16 20:22:08.488684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.506067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.506090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:53.708 [2024-10-16 20:22:08.506100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.344 ms 00:14:53.708 [2024-10-16 20:22:08.506105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.523239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.523340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:53.708 [2024-10-16 20:22:08.523356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.053 ms 00:14:53.708 [2024-10-16 20:22:08.523361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.523394] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:53.708 [2024-10-16 20:22:08.523406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523441] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 
20:22:08.523612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:14:53.708 [2024-10-16 20:22:08.523789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.523997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:53.708 [2024-10-16 20:22:08.524131] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:53.708 [2024-10-16 20:22:08.524139] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62700e03-ed59-46f2-a9bc-82be14a078b3 00:14:53.708 [2024-10-16 20:22:08.524145] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:53.708 [2024-10-16 20:22:08.524153] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:53.708 [2024-10-16 20:22:08.524158] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:53.708 [2024-10-16 20:22:08.524166] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:53.708 [2024-10-16 20:22:08.524171] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:53.708 [2024-10-16 20:22:08.524180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:53.708 [2024-10-16 20:22:08.524186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:53.708 [2024-10-16 20:22:08.524192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:53.708 [2024-10-16 20:22:08.524197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:53.708 [2024-10-16 20:22:08.524206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.524214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:53.708 [2024-10-16 20:22:08.524222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:14:53.708 [2024-10-16 20:22:08.524229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.708 [2024-10-16 20:22:08.534215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.708 [2024-10-16 20:22:08.534316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:53.709 [2024-10-16 20:22:08.534331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.948 ms 00:14:53.709 [2024-10-16 20:22:08.534337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.709 [2024-10-16 20:22:08.534504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.709 [2024-10-16 20:22:08.534512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:53.709 [2024-10-16 20:22:08.534520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:14:53.709 [2024-10-16 20:22:08.534525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.709 [2024-10-16 20:22:08.571099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.709 [2024-10-16 20:22:08.571125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:53.709 [2024-10-16 20:22:08.571135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.709 [2024-10-16 20:22:08.571141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.709 [2024-10-16 20:22:08.571200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.709 [2024-10-16 20:22:08.571206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:53.709 [2024-10-16 20:22:08.571214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.709 [2024-10-16 20:22:08.571220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.709 [2024-10-16 20:22:08.571299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.709 [2024-10-16 20:22:08.571308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:53.709 [2024-10-16 20:22:08.571317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.709 [2024-10-16 20:22:08.571323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.709 [2024-10-16 20:22:08.571347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.709 [2024-10-16 20:22:08.571356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:14:53.709 [2024-10-16 20:22:08.571364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.709 [2024-10-16 20:22:08.571370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.967 [2024-10-16 20:22:08.639921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.967 [2024-10-16 20:22:08.639958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:53.967 [2024-10-16 20:22:08.639969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.967 [2024-10-16 20:22:08.639976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.967 [2024-10-16 20:22:08.663852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.967 [2024-10-16 20:22:08.663879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:53.967 [2024-10-16 20:22:08.663889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.967 [2024-10-16 20:22:08.663896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.967 [2024-10-16 20:22:08.663962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.967 [2024-10-16 20:22:08.663970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:53.967 [2024-10-16 20:22:08.663979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.967 [2024-10-16 20:22:08.663985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.967 [2024-10-16 20:22:08.664040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.967 [2024-10-16 20:22:08.664229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:53.967 [2024-10-16 20:22:08.664251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.967 [2024-10-16 20:22:08.664266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.967 [2024-10-16 20:22:08.664386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.967 [2024-10-16 20:22:08.664407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:53.967 [2024-10-16 20:22:08.664425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.967 [2024-10-16 20:22:08.664471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.967 [2024-10-16 20:22:08.664527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.967 [2024-10-16 20:22:08.664534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:53.967 [2024-10-16 20:22:08.664542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.967 [2024-10-16 20:22:08.664551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.967 [2024-10-16 20:22:08.664597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.967 [2024-10-16 20:22:08.664603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:53.967 [2024-10-16 20:22:08.664611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.967 [2024-10-16 20:22:08.664618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.967 [2024-10-16 20:22:08.664671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.967 [2024-10-16 20:22:08.664678] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:53.967 [2024-10-16 20:22:08.664688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.967 [2024-10-16 20:22:08.664694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.967 [2024-10-16 20:22:08.664839] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 252.150 ms, result 0 00:14:53.967 true 00:14:53.967 20:22:08 -- ftl/fio.sh@75 -- # killprocess 70685 00:14:53.967 20:22:08 -- common/autotest_common.sh@926 -- # '[' -z 70685 ']' 00:14:53.967 20:22:08 -- common/autotest_common.sh@930 -- # kill -0 70685 00:14:53.967 20:22:08 -- common/autotest_common.sh@931 -- # uname 00:14:53.967 20:22:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:53.967 20:22:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70685 00:14:53.967 killing process with pid 70685 00:14:53.967 20:22:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:53.967 20:22:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:53.967 20:22:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70685' 00:14:53.967 20:22:08 -- common/autotest_common.sh@945 -- # kill 70685 00:14:53.967 20:22:08 -- common/autotest_common.sh@950 -- # wait 70685 00:15:00.528 20:22:14 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:00.528 20:22:14 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:00.528 20:22:14 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:00.528 20:22:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:00.528 20:22:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.528 20:22:14 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:00.528 20:22:14 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:00.528 20:22:14 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:00.528 20:22:14 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:00.528 20:22:14 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:00.528 20:22:14 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:00.528 20:22:14 -- common/autotest_common.sh@1320 -- # shift 00:15:00.528 20:22:14 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:00.528 20:22:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:00.528 20:22:14 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:00.528 20:22:14 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:00.528 20:22:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:00.528 20:22:14 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:00.528 20:22:14 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:00.528 20:22:14 -- common/autotest_common.sh@1326 -- # break 00:15:00.528 20:22:14 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:00.528 20:22:14 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:00.528 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:00.528 fio-3.35 00:15:00.528 Starting 1 thread 00:15:05.815 00:15:05.815 test: (groupid=0, jobs=1): err= 0: pid=70918: Wed Oct 16 20:22:20 2024 00:15:05.815 read: IOPS=886, BW=58.8MiB/s (61.7MB/s)(255MiB/4326msec) 00:15:05.815 slat (nsec): min=2887, max=33723, avg=4381.17, stdev=2183.61 00:15:05.815 clat (usec): min=243, max=1803, avg=516.21, stdev=248.50 00:15:05.815 lat (usec): min=246, max=1808, avg=520.59, stdev=249.62 00:15:05.815 clat percentiles (usec): 00:15:05.815 | 1.00th=[ 269], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 310], 00:15:05.815 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 412], 60.00th=[ 465], 00:15:05.815 | 70.00th=[ 652], 80.00th=[ 832], 90.00th=[ 898], 95.00th=[ 922], 00:15:05.815 | 99.00th=[ 1090], 99.50th=[ 1205], 99.90th=[ 1467], 99.95th=[ 1713], 00:15:05.815 | 99.99th=[ 1811] 00:15:05.815 write: IOPS=892, BW=59.3MiB/s (62.1MB/s)(256MiB/4321msec); 0 zone resets 00:15:05.815 slat (nsec): min=13189, max=52273, avg=18095.73, stdev=3941.68 00:15:05.815 clat (usec): min=277, max=1913, avg=573.26, stdev=288.75 00:15:05.815 lat (usec): min=291, max=1941, avg=591.35, stdev=290.94 00:15:05.815 clat percentiles (usec): 00:15:05.815 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 334], 00:15:05.815 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 453], 60.00th=[ 578], 00:15:05.815 | 70.00th=[ 709], 80.00th=[ 914], 90.00th=[ 971], 95.00th=[ 1029], 00:15:05.815 | 99.00th=[ 1549], 99.50th=[ 1696], 99.90th=[ 1844], 99.95th=[ 1909], 00:15:05.815 | 99.99th=[ 1909] 00:15:05.815 bw ( KiB/s): min=35496, max=90576, per=100.00%, avg=60979.00, stdev=26421.52, samples=8 00:15:05.815 iops : min= 522, max= 1332, avg=896.75, stdev=388.55, samples=8 00:15:05.815 lat (usec) : 250=0.04%, 500=59.53%, 750=12.73%, 1000=23.87% 00:15:05.815 lat (msec) : 2=3.84% 00:15:05.815 cpu : usr=99.33%, sys=0.12%, ctx=13, majf=0, minf=1318 00:15:05.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.815 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.815 00:15:05.815 Run status group 0 (all jobs): 00:15:05.815 READ: bw=58.8MiB/s (61.7MB/s), 58.8MiB/s-58.8MiB/s (61.7MB/s-61.7MB/s), io=255MiB (267MB), run=4326-4326msec 00:15:05.815 WRITE: bw=59.3MiB/s (62.1MB/s), 59.3MiB/s-59.3MiB/s (62.1MB/s-62.1MB/s), io=256MiB (269MB), run=4321-4321msec 00:15:06.832 ----------------------------------------------------- 00:15:06.832 Suppressions used: 00:15:06.832 count bytes template 00:15:06.832 1 5 /usr/src/fio/parse.c 00:15:06.832 1 8 libtcmalloc_minimal.so 00:15:06.832 1 904 libcrypto.so 00:15:06.832 ----------------------------------------------------- 00:15:06.832 00:15:06.832 20:22:21 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:06.832 20:22:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:06.832 20:22:21 -- common/autotest_common.sh@10 -- # set +x 00:15:06.832 20:22:21 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:06.832 20:22:21 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:06.832 20:22:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:06.832 20:22:21 -- common/autotest_common.sh@10 -- # set +x 00:15:06.832 20:22:21 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:06.832 20:22:21 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:06.832 20:22:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:06.832 20:22:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:06.832 20:22:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:06.832 20:22:21 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.832 20:22:21 -- common/autotest_common.sh@1320 -- # shift 00:15:06.832 20:22:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:06.832 20:22:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.832 20:22:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.832 20:22:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:06.832 20:22:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:07.093 20:22:21 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:07.093 20:22:21 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:07.093 20:22:21 -- common/autotest_common.sh@1326 -- # break 00:15:07.093 20:22:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:07.093 20:22:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:07.093 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:07.093 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:07.093 fio-3.35 00:15:07.093 Starting 2 threads 00:15:33.676 00:15:33.676 first_half: (groupid=0, jobs=1): err= 0: pid=71026: Wed Oct 16 20:22:48 2024 00:15:33.676 read: IOPS=2607, BW=10.2MiB/s (10.7MB/s)(255MiB/25021msec) 00:15:33.676 slat (nsec): min=2971, max=33287, avg=5106.32, stdev=1062.76 00:15:33.676 clat (usec): min=618, max=371796, avg=38191.41, stdev=28351.09 00:15:33.676 lat (usec): min=623, max=371800, avg=38196.52, stdev=28351.15 00:15:33.676 clat percentiles (msec): 00:15:33.676 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 29], 00:15:33.676 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 33], 00:15:33.676 | 70.00th=[ 37], 80.00th=[ 42], 90.00th=[ 47], 95.00th=[ 60], 00:15:33.676 | 99.00th=[ 201], 99.50th=[ 251], 99.90th=[ 317], 99.95th=[ 330], 00:15:33.676 | 99.99th=[ 355] 00:15:33.676 write: IOPS=3304, BW=12.9MiB/s (13.5MB/s)(256MiB/19835msec); 0 zone resets 00:15:33.676 slat (usec): min=3, max=1251, avg= 6.50, stdev= 9.41 00:15:33.676 clat (usec): min=351, max=79002, avg=10803.37, stdev=14608.17 00:15:33.676 lat (usec): min=359, max=79019, avg=10809.87, stdev=14608.18 00:15:33.676 clat percentiles (usec): 00:15:33.676 | 1.00th=[ 685], 5.00th=[ 832], 10.00th=[ 988], 20.00th=[ 1287], 00:15:33.676 | 30.00th=[ 3687], 40.00th=[ 5211], 50.00th=[ 6652], 60.00th=[ 8455], 00:15:33.676 | 70.00th=[10421], 80.00th=[13304], 90.00th=[19792], 95.00th=[58459], 00:15:33.676 | 99.00th=[65274], 99.50th=[67634], 99.90th=[72877], 99.95th=[77071], 00:15:33.676 | 99.99th=[78119] 00:15:33.676 bw ( KiB/s): min= 1120, max=41072, per=89.60%, 
avg=22788.70, stdev=13101.78, samples=23 00:15:33.676 iops : min= 280, max=10268, avg=5697.09, stdev=3275.39, samples=23 00:15:33.676 lat (usec) : 500=0.02%, 750=1.34%, 1000=3.98% 00:15:33.676 lat (msec) : 2=7.16%, 4=3.51%, 10=18.30%, 20=12.03%, 50=46.34% 00:15:33.676 lat (msec) : 100=5.88%, 250=1.17%, 500=0.26% 00:15:33.676 cpu : usr=99.49%, sys=0.10%, ctx=32, majf=0, minf=5535 00:15:33.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:33.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.676 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.676 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.677 second_half: (groupid=0, jobs=1): err= 0: pid=71027: Wed Oct 16 20:22:48 2024 00:15:33.677 read: IOPS=2588, BW=10.1MiB/s (10.6MB/s)(255MiB/25211msec) 00:15:33.677 slat (usec): min=2, max=561, avg= 4.47, stdev= 2.70 00:15:33.677 clat (usec): min=627, max=405433, avg=38434.30, stdev=32935.82 00:15:33.677 lat (usec): min=631, max=405439, avg=38438.77, stdev=32936.10 00:15:33.677 clat percentiles (msec): 00:15:33.677 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 29], 00:15:33.677 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 33], 00:15:33.677 | 70.00th=[ 37], 80.00th=[ 41], 90.00th=[ 47], 95.00th=[ 62], 00:15:33.677 | 99.00th=[ 226], 99.50th=[ 275], 99.90th=[ 342], 99.95th=[ 355], 00:15:33.677 | 99.99th=[ 401] 00:15:33.677 write: IOPS=3179, BW=12.4MiB/s (13.0MB/s)(256MiB/20614msec); 0 zone resets 00:15:33.677 slat (usec): min=3, max=667, avg= 5.82, stdev= 4.91 00:15:33.677 clat (usec): min=354, max=78731, avg=10951.98, stdev=15351.83 00:15:33.677 lat (usec): min=363, max=78736, avg=10957.80, stdev=15351.79 00:15:33.677 clat percentiles (usec): 00:15:33.677 | 1.00th=[ 644], 5.00th=[ 758], 10.00th=[ 898], 20.00th=[ 1254], 00:15:33.677 | 30.00th=[ 2999], 40.00th=[ 4424], 50.00th=[ 5407], 60.00th=[ 7046], 00:15:33.677 | 70.00th=[10028], 80.00th=[14222], 90.00th=[27132], 95.00th=[59507], 00:15:33.677 | 99.00th=[65799], 99.50th=[68682], 99.90th=[71828], 99.95th=[72877], 00:15:33.677 | 99.99th=[76022] 00:15:33.677 bw ( KiB/s): min= 5848, max=48072, per=93.69%, avg=23830.00, stdev=11397.18, samples=22 00:15:33.677 iops : min= 1462, max=12018, avg=5957.50, stdev=2849.30, samples=22 00:15:33.677 lat (usec) : 500=0.01%, 750=2.34%, 1000=4.36% 00:15:33.677 lat (msec) : 2=5.12%, 4=6.90%, 10=18.19%, 20=9.13%, 50=46.45% 00:15:33.677 lat (msec) : 100=5.86%, 250=1.29%, 500=0.36% 00:15:33.677 cpu : usr=99.08%, sys=0.21%, ctx=149, majf=0, minf=5582 00:15:33.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:33.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.677 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.677 issued rwts: total=65256,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.677 00:15:33.677 Run status group 0 (all jobs): 00:15:33.677 READ: bw=20.2MiB/s (21.2MB/s), 10.1MiB/s-10.2MiB/s (10.6MB/s-10.7MB/s), io=510MiB (535MB), run=25021-25211msec 00:15:33.677 WRITE: bw=24.8MiB/s (26.0MB/s), 12.4MiB/s-12.9MiB/s (13.0MB/s-13.5MB/s), io=512MiB (537MB), run=19835-20614msec 00:15:36.226 ----------------------------------------------------- 00:15:36.226 Suppressions used: 00:15:36.226 count bytes template 00:15:36.226 2 10 
/usr/src/fio/parse.c 00:15:36.226 2 192 /usr/src/fio/iolog.c 00:15:36.226 1 8 libtcmalloc_minimal.so 00:15:36.226 1 904 libcrypto.so 00:15:36.226 ----------------------------------------------------- 00:15:36.226 00:15:36.226 20:22:50 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:36.226 20:22:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:36.226 20:22:50 -- common/autotest_common.sh@10 -- # set +x 00:15:36.226 20:22:50 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:36.226 20:22:50 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:36.226 20:22:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:36.226 20:22:50 -- common/autotest_common.sh@10 -- # set +x 00:15:36.226 20:22:50 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:36.226 20:22:50 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:36.226 20:22:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:36.226 20:22:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.226 20:22:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:36.226 20:22:50 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.226 20:22:50 -- common/autotest_common.sh@1320 -- # shift 00:15:36.226 20:22:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:36.226 20:22:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.227 20:22:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.227 20:22:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:36.227 20:22:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:36.227 20:22:50 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:36.227 20:22:50 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:36.227 20:22:50 -- common/autotest_common.sh@1326 -- # break 00:15:36.227 20:22:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:36.227 20:22:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:36.227 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:36.227 fio-3.35 00:15:36.227 Starting 1 thread 00:15:51.140 00:15:51.140 test: (groupid=0, jobs=1): err= 0: pid=71361: Wed Oct 16 20:23:05 2024 00:15:51.140 read: IOPS=7786, BW=30.4MiB/s (31.9MB/s)(255MiB/8374msec) 00:15:51.140 slat (nsec): min=2936, max=42710, avg=4106.26, stdev=1756.89 00:15:51.140 clat (usec): min=447, max=36963, avg=16432.17, stdev=3219.80 00:15:51.140 lat (usec): min=453, max=36972, avg=16436.28, stdev=3220.62 00:15:51.140 clat percentiles (usec): 00:15:51.140 | 1.00th=[13698], 5.00th=[13960], 10.00th=[14091], 20.00th=[14222], 00:15:51.140 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15664], 00:15:51.140 | 70.00th=[16712], 80.00th=[18220], 90.00th=[21103], 95.00th=[23200], 00:15:51.140 | 99.00th=[27657], 99.50th=[30016], 99.90th=[34866], 99.95th=[35914], 00:15:51.140 | 99.99th=[36439] 00:15:51.140 write: IOPS=14.3k, BW=55.9MiB/s (58.6MB/s)(256MiB/4582msec); 0 zone resets 
00:15:51.140 slat (usec): min=3, max=716, avg= 5.53, stdev= 3.78 00:15:51.140 clat (usec): min=462, max=48703, avg=8893.91, stdev=10385.53 00:15:51.140 lat (usec): min=466, max=48708, avg=8899.44, stdev=10385.54 00:15:51.140 clat percentiles (usec): 00:15:51.140 | 1.00th=[ 652], 5.00th=[ 799], 10.00th=[ 914], 20.00th=[ 1074], 00:15:51.140 | 30.00th=[ 1237], 40.00th=[ 1745], 50.00th=[ 5735], 60.00th=[ 6849], 00:15:51.140 | 70.00th=[ 8717], 80.00th=[13829], 90.00th=[29492], 95.00th=[31065], 00:15:51.140 | 99.00th=[37487], 99.50th=[39584], 99.90th=[42206], 99.95th=[44827], 00:15:51.140 | 99.99th=[47449] 00:15:51.140 bw ( KiB/s): min= 7272, max=75440, per=91.64%, avg=52428.80, stdev=19396.07, samples=10 00:15:51.140 iops : min= 1818, max=18860, avg=13107.20, stdev=4849.02, samples=10 00:15:51.140 lat (usec) : 500=0.01%, 750=1.67%, 1000=5.74% 00:15:51.140 lat (msec) : 2=12.95%, 4=0.74%, 10=16.24%, 20=47.77%, 50=14.87% 00:15:51.140 cpu : usr=99.32%, sys=0.19%, ctx=27, majf=0, minf=5567 00:15:51.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:51.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.140 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.140 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.140 00:15:51.140 Run status group 0 (all jobs): 00:15:51.140 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=255MiB (267MB), run=8374-8374msec 00:15:51.140 WRITE: bw=55.9MiB/s (58.6MB/s), 55.9MiB/s-55.9MiB/s (58.6MB/s-58.6MB/s), io=256MiB (268MB), run=4582-4582msec 00:15:52.086 ----------------------------------------------------- 00:15:52.086 Suppressions used: 00:15:52.086 count bytes template 00:15:52.086 1 5 /usr/src/fio/parse.c 00:15:52.086 2 192 /usr/src/fio/iolog.c 00:15:52.086 1 8 libtcmalloc_minimal.so 00:15:52.086 1 904 libcrypto.so 00:15:52.086 ----------------------------------------------------- 00:15:52.086 00:15:52.086 20:23:06 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:52.086 20:23:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:52.086 20:23:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.086 20:23:06 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:52.086 Remove shared memory files 00:15:52.086 20:23:06 -- ftl/fio.sh@85 -- # remove_shm 00:15:52.086 20:23:06 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:52.086 20:23:06 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:52.086 20:23:06 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:52.086 20:23:06 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56176 /dev/shm/spdk_tgt_trace.pid69618 00:15:52.086 20:23:06 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:52.086 20:23:06 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:52.086 ************************************ 00:15:52.086 END TEST ftl_fio_basic 00:15:52.086 ************************************ 00:15:52.086 00:15:52.086 real 1m7.038s 00:15:52.086 user 2m28.196s 00:15:52.086 sys 0m3.042s 00:15:52.086 20:23:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.086 20:23:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.086 20:23:06 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:52.086 20:23:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:52.086 
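The ftl_bdevperf run starting here follows the same bring-up pattern as the fio suite above: launch the SPDK app with -z so it waits for RPC, record its pid, and poll the RPC socket until it answers before issuing commands. A condensed sketch of that start-and-wait step, assuming an rpc_get_methods polling loop (the real waitforlisten helper in autotest_common.sh performs additional checks):

    # Start bdevperf in wait-for-RPC mode, as traced below, then poll its socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done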
20:23:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:52.086 20:23:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.086 ************************************ 00:15:52.086 START TEST ftl_bdevperf 00:15:52.086 ************************************ 00:15:52.086 20:23:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:52.086 * Looking for test storage... 00:15:52.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.086 20:23:06 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:52.086 20:23:06 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:52.086 20:23:06 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.086 20:23:06 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.086 20:23:06 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:52.086 20:23:06 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:52.086 20:23:06 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.086 20:23:06 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:52.086 20:23:06 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:52.086 20:23:06 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.086 20:23:06 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.086 20:23:06 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:52.086 20:23:06 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:52.086 20:23:06 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:52.086 20:23:06 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:52.086 20:23:06 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:52.086 20:23:06 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:52.086 20:23:06 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.086 20:23:06 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.086 20:23:06 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:52.086 20:23:06 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:52.086 20:23:06 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:52.086 20:23:06 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:52.086 20:23:06 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:52.086 20:23:06 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:52.087 20:23:06 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:52.087 20:23:06 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:52.087 20:23:06 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:52.087 20:23:06 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:52.087 20:23:06 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:52.087 20:23:06 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:15:52.087 20:23:06 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:52.087 20:23:06 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.087 20:23:06 -- 
ftl/bdevperf.sh@15 -- # timeout=240 00:15:52.087 20:23:06 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:52.087 20:23:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:52.087 20:23:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.087 20:23:06 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71587 00:15:52.087 20:23:06 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:52.087 20:23:06 -- ftl/bdevperf.sh@22 -- # waitforlisten 71587 00:15:52.087 20:23:06 -- common/autotest_common.sh@819 -- # '[' -z 71587 ']' 00:15:52.087 20:23:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.087 20:23:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:52.087 20:23:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.087 20:23:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:52.087 20:23:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.087 20:23:06 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:52.348 [2024-10-16 20:23:07.044764] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:52.348 [2024-10-16 20:23:07.044906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71587 ] 00:15:52.348 [2024-10-16 20:23:07.198748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.608 [2024-10-16 20:23:07.420386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.180 20:23:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:53.180 20:23:07 -- common/autotest_common.sh@852 -- # return 0 00:15:53.181 20:23:07 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:53.181 20:23:07 -- ftl/common.sh@54 -- # local name=nvme0 00:15:53.181 20:23:07 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:53.181 20:23:07 -- ftl/common.sh@56 -- # local size=103424 00:15:53.181 20:23:07 -- ftl/common.sh@59 -- # local base_bdev 00:15:53.181 20:23:07 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:53.442 20:23:08 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:53.442 20:23:08 -- ftl/common.sh@62 -- # local base_size 00:15:53.442 20:23:08 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:53.442 20:23:08 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:15:53.442 20:23:08 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:53.442 20:23:08 -- common/autotest_common.sh@1359 -- # local bs 00:15:53.442 20:23:08 -- common/autotest_common.sh@1360 -- # local nb 00:15:53.442 20:23:08 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:53.442 20:23:08 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:53.442 { 00:15:53.442 "name": "nvme0n1", 00:15:53.442 "aliases": [ 00:15:53.442 "d13610e5-8173-48c9-9668-d283666b0816" 00:15:53.442 ], 00:15:53.442 "product_name": "NVMe disk", 00:15:53.442 "block_size": 4096, 00:15:53.442 "num_blocks": 1310720, 00:15:53.442 
"uuid": "d13610e5-8173-48c9-9668-d283666b0816", 00:15:53.442 "assigned_rate_limits": { 00:15:53.442 "rw_ios_per_sec": 0, 00:15:53.442 "rw_mbytes_per_sec": 0, 00:15:53.442 "r_mbytes_per_sec": 0, 00:15:53.442 "w_mbytes_per_sec": 0 00:15:53.442 }, 00:15:53.442 "claimed": true, 00:15:53.442 "claim_type": "read_many_write_one", 00:15:53.442 "zoned": false, 00:15:53.442 "supported_io_types": { 00:15:53.442 "read": true, 00:15:53.442 "write": true, 00:15:53.442 "unmap": true, 00:15:53.442 "write_zeroes": true, 00:15:53.442 "flush": true, 00:15:53.442 "reset": true, 00:15:53.442 "compare": true, 00:15:53.442 "compare_and_write": false, 00:15:53.442 "abort": true, 00:15:53.442 "nvme_admin": true, 00:15:53.442 "nvme_io": true 00:15:53.442 }, 00:15:53.442 "driver_specific": { 00:15:53.442 "nvme": [ 00:15:53.442 { 00:15:53.442 "pci_address": "0000:00:07.0", 00:15:53.442 "trid": { 00:15:53.442 "trtype": "PCIe", 00:15:53.442 "traddr": "0000:00:07.0" 00:15:53.442 }, 00:15:53.442 "ctrlr_data": { 00:15:53.442 "cntlid": 0, 00:15:53.442 "vendor_id": "0x1b36", 00:15:53.442 "model_number": "QEMU NVMe Ctrl", 00:15:53.442 "serial_number": "12341", 00:15:53.442 "firmware_revision": "8.0.0", 00:15:53.442 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:53.442 "oacs": { 00:15:53.442 "security": 0, 00:15:53.442 "format": 1, 00:15:53.442 "firmware": 0, 00:15:53.442 "ns_manage": 1 00:15:53.442 }, 00:15:53.442 "multi_ctrlr": false, 00:15:53.442 "ana_reporting": false 00:15:53.442 }, 00:15:53.442 "vs": { 00:15:53.442 "nvme_version": "1.4" 00:15:53.442 }, 00:15:53.442 "ns_data": { 00:15:53.442 "id": 1, 00:15:53.442 "can_share": false 00:15:53.442 } 00:15:53.442 } 00:15:53.442 ], 00:15:53.442 "mp_policy": "active_passive" 00:15:53.442 } 00:15:53.442 } 00:15:53.442 ]' 00:15:53.442 20:23:08 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:53.703 20:23:08 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:53.703 20:23:08 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:53.703 20:23:08 -- common/autotest_common.sh@1363 -- # nb=1310720 00:15:53.703 20:23:08 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:15:53.703 20:23:08 -- common/autotest_common.sh@1367 -- # echo 5120 00:15:53.703 20:23:08 -- ftl/common.sh@63 -- # base_size=5120 00:15:53.703 20:23:08 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:53.703 20:23:08 -- ftl/common.sh@67 -- # clear_lvols 00:15:53.703 20:23:08 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:53.703 20:23:08 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:53.703 20:23:08 -- ftl/common.sh@28 -- # stores=f56c36c5-baf0-4a83-8784-3e36111b3e4c 00:15:53.703 20:23:08 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:53.703 20:23:08 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f56c36c5-baf0-4a83-8784-3e36111b3e4c 00:15:53.964 20:23:08 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:54.226 20:23:09 -- ftl/common.sh@68 -- # lvs=c476d9c3-3fce-4994-801a-26086272b8e0 00:15:54.226 20:23:09 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c476d9c3-3fce-4994-801a-26086272b8e0 00:15:54.487 20:23:09 -- ftl/bdevperf.sh@23 -- # split_bdev=01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:54.487 20:23:09 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:54.487 20:23:09 -- ftl/common.sh@35 -- # 
local name=nvc0 00:15:54.487 20:23:09 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:54.487 20:23:09 -- ftl/common.sh@37 -- # local base_bdev=01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:54.487 20:23:09 -- ftl/common.sh@38 -- # local cache_size= 00:15:54.487 20:23:09 -- ftl/common.sh@41 -- # get_bdev_size 01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:54.487 20:23:09 -- common/autotest_common.sh@1357 -- # local bdev_name=01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:54.487 20:23:09 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:54.487 20:23:09 -- common/autotest_common.sh@1359 -- # local bs 00:15:54.487 20:23:09 -- common/autotest_common.sh@1360 -- # local nb 00:15:54.487 20:23:09 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:54.749 20:23:09 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:54.749 { 00:15:54.749 "name": "01d0a459-8deb-40bd-b31c-aa95a363d13e", 00:15:54.749 "aliases": [ 00:15:54.749 "lvs/nvme0n1p0" 00:15:54.749 ], 00:15:54.749 "product_name": "Logical Volume", 00:15:54.749 "block_size": 4096, 00:15:54.749 "num_blocks": 26476544, 00:15:54.749 "uuid": "01d0a459-8deb-40bd-b31c-aa95a363d13e", 00:15:54.749 "assigned_rate_limits": { 00:15:54.749 "rw_ios_per_sec": 0, 00:15:54.749 "rw_mbytes_per_sec": 0, 00:15:54.749 "r_mbytes_per_sec": 0, 00:15:54.749 "w_mbytes_per_sec": 0 00:15:54.749 }, 00:15:54.749 "claimed": false, 00:15:54.749 "zoned": false, 00:15:54.749 "supported_io_types": { 00:15:54.749 "read": true, 00:15:54.749 "write": true, 00:15:54.749 "unmap": true, 00:15:54.749 "write_zeroes": true, 00:15:54.749 "flush": false, 00:15:54.749 "reset": true, 00:15:54.749 "compare": false, 00:15:54.749 "compare_and_write": false, 00:15:54.749 "abort": false, 00:15:54.749 "nvme_admin": false, 00:15:54.749 "nvme_io": false 00:15:54.749 }, 00:15:54.749 "driver_specific": { 00:15:54.749 "lvol": { 00:15:54.749 "lvol_store_uuid": "c476d9c3-3fce-4994-801a-26086272b8e0", 00:15:54.749 "base_bdev": "nvme0n1", 00:15:54.749 "thin_provision": true, 00:15:54.749 "snapshot": false, 00:15:54.749 "clone": false, 00:15:54.749 "esnap_clone": false 00:15:54.749 } 00:15:54.749 } 00:15:54.749 } 00:15:54.749 ]' 00:15:54.749 20:23:09 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:54.749 20:23:09 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:54.749 20:23:09 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:54.749 20:23:09 -- common/autotest_common.sh@1363 -- # nb=26476544 00:15:54.749 20:23:09 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:15:54.749 20:23:09 -- common/autotest_common.sh@1367 -- # echo 103424 00:15:54.749 20:23:09 -- ftl/common.sh@41 -- # local base_size=5171 00:15:54.749 20:23:09 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:54.749 20:23:09 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:55.043 20:23:09 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:55.043 20:23:09 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:55.044 20:23:09 -- ftl/common.sh@48 -- # get_bdev_size 01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:55.044 20:23:09 -- common/autotest_common.sh@1357 -- # local bdev_name=01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:55.044 20:23:09 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:55.044 20:23:09 -- common/autotest_common.sh@1359 -- # local bs 00:15:55.044 20:23:09 -- common/autotest_common.sh@1360 -- # local nb 
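The get_bdev_size helper traced here computes a size in MiB from the bdev_get_bdevs JSON: block_size times num_blocks, divided down to MiB. A minimal standalone sketch of the same arithmetic, using the rpc.py and jq calls visible in the trace (the one-liner itself is illustrative, not part of the captured run):

    # block_size=4096 and num_blocks=26476544 for this lvol, per the dump below
    bs=$(./scripts/rpc.py bdev_get_bdevs -b 01d0a459-8deb-40bd-b31c-aa95a363d13e | jq '.[] .block_size')
    nb=$(./scripts/rpc.py bdev_get_bdevs -b 01d0a459-8deb-40bd-b31c-aa95a363d13e | jq '.[] .num_blocks')
    echo $(( bs * nb / 1024 / 1024 ))   # -> 103424 (MiB), the value the helper echoes
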
00:15:55.044 20:23:09 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:55.044 20:23:09 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:55.044 { 00:15:55.044 "name": "01d0a459-8deb-40bd-b31c-aa95a363d13e", 00:15:55.044 "aliases": [ 00:15:55.044 "lvs/nvme0n1p0" 00:15:55.044 ], 00:15:55.044 "product_name": "Logical Volume", 00:15:55.044 "block_size": 4096, 00:15:55.044 "num_blocks": 26476544, 00:15:55.044 "uuid": "01d0a459-8deb-40bd-b31c-aa95a363d13e", 00:15:55.044 "assigned_rate_limits": { 00:15:55.044 "rw_ios_per_sec": 0, 00:15:55.044 "rw_mbytes_per_sec": 0, 00:15:55.044 "r_mbytes_per_sec": 0, 00:15:55.044 "w_mbytes_per_sec": 0 00:15:55.044 }, 00:15:55.044 "claimed": false, 00:15:55.044 "zoned": false, 00:15:55.044 "supported_io_types": { 00:15:55.044 "read": true, 00:15:55.044 "write": true, 00:15:55.044 "unmap": true, 00:15:55.044 "write_zeroes": true, 00:15:55.044 "flush": false, 00:15:55.044 "reset": true, 00:15:55.044 "compare": false, 00:15:55.044 "compare_and_write": false, 00:15:55.044 "abort": false, 00:15:55.044 "nvme_admin": false, 00:15:55.044 "nvme_io": false 00:15:55.044 }, 00:15:55.044 "driver_specific": { 00:15:55.044 "lvol": { 00:15:55.044 "lvol_store_uuid": "c476d9c3-3fce-4994-801a-26086272b8e0", 00:15:55.044 "base_bdev": "nvme0n1", 00:15:55.044 "thin_provision": true, 00:15:55.044 "snapshot": false, 00:15:55.044 "clone": false, 00:15:55.044 "esnap_clone": false 00:15:55.044 } 00:15:55.044 } 00:15:55.044 } 00:15:55.044 ]' 00:15:55.044 20:23:09 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:55.305 20:23:09 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:55.305 20:23:09 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:55.305 20:23:10 -- common/autotest_common.sh@1363 -- # nb=26476544 00:15:55.305 20:23:10 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:15:55.305 20:23:10 -- common/autotest_common.sh@1367 -- # echo 103424 00:15:55.305 20:23:10 -- ftl/common.sh@48 -- # cache_size=5171 00:15:55.305 20:23:10 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:55.305 20:23:10 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:55.305 20:23:10 -- ftl/bdevperf.sh@26 -- # get_bdev_size 01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:55.305 20:23:10 -- common/autotest_common.sh@1357 -- # local bdev_name=01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:55.305 20:23:10 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:55.305 20:23:10 -- common/autotest_common.sh@1359 -- # local bs 00:15:55.305 20:23:10 -- common/autotest_common.sh@1360 -- # local nb 00:15:55.305 20:23:10 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01d0a459-8deb-40bd-b31c-aa95a363d13e 00:15:55.565 20:23:10 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:55.565 { 00:15:55.565 "name": "01d0a459-8deb-40bd-b31c-aa95a363d13e", 00:15:55.565 "aliases": [ 00:15:55.565 "lvs/nvme0n1p0" 00:15:55.565 ], 00:15:55.565 "product_name": "Logical Volume", 00:15:55.565 "block_size": 4096, 00:15:55.565 "num_blocks": 26476544, 00:15:55.565 "uuid": "01d0a459-8deb-40bd-b31c-aa95a363d13e", 00:15:55.565 "assigned_rate_limits": { 00:15:55.565 "rw_ios_per_sec": 0, 00:15:55.565 "rw_mbytes_per_sec": 0, 00:15:55.565 "r_mbytes_per_sec": 0, 00:15:55.565 "w_mbytes_per_sec": 0 00:15:55.565 }, 00:15:55.565 "claimed": false, 00:15:55.565 "zoned": false, 00:15:55.565 
"supported_io_types": { 00:15:55.565 "read": true, 00:15:55.565 "write": true, 00:15:55.565 "unmap": true, 00:15:55.565 "write_zeroes": true, 00:15:55.565 "flush": false, 00:15:55.565 "reset": true, 00:15:55.565 "compare": false, 00:15:55.565 "compare_and_write": false, 00:15:55.565 "abort": false, 00:15:55.565 "nvme_admin": false, 00:15:55.565 "nvme_io": false 00:15:55.565 }, 00:15:55.565 "driver_specific": { 00:15:55.565 "lvol": { 00:15:55.565 "lvol_store_uuid": "c476d9c3-3fce-4994-801a-26086272b8e0", 00:15:55.565 "base_bdev": "nvme0n1", 00:15:55.565 "thin_provision": true, 00:15:55.565 "snapshot": false, 00:15:55.565 "clone": false, 00:15:55.565 "esnap_clone": false 00:15:55.565 } 00:15:55.565 } 00:15:55.565 } 00:15:55.565 ]' 00:15:55.565 20:23:10 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:55.565 20:23:10 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:55.565 20:23:10 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:55.565 20:23:10 -- common/autotest_common.sh@1363 -- # nb=26476544 00:15:55.565 20:23:10 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:15:55.565 20:23:10 -- common/autotest_common.sh@1367 -- # echo 103424 00:15:55.565 20:23:10 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:55.565 20:23:10 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 01d0a459-8deb-40bd-b31c-aa95a363d13e -c nvc0n1p0 --l2p_dram_limit 20 00:15:55.826 [2024-10-16 20:23:10.627351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.826 [2024-10-16 20:23:10.627470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:55.826 [2024-10-16 20:23:10.627490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:55.826 [2024-10-16 20:23:10.627497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.826 [2024-10-16 20:23:10.627540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.826 [2024-10-16 20:23:10.627547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:55.826 [2024-10-16 20:23:10.627555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:55.826 [2024-10-16 20:23:10.627561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.826 [2024-10-16 20:23:10.627575] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:55.826 [2024-10-16 20:23:10.628157] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:55.826 [2024-10-16 20:23:10.628173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.826 [2024-10-16 20:23:10.628179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:55.826 [2024-10-16 20:23:10.628187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:15:55.826 [2024-10-16 20:23:10.628192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.826 [2024-10-16 20:23:10.628238] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4bf29a9b-b20b-499e-becd-0c552d2da929 00:15:55.826 [2024-10-16 20:23:10.629153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.826 [2024-10-16 20:23:10.629181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:55.826 [2024-10-16 20:23:10.629188] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:55.826 [2024-10-16 20:23:10.629196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.826 [2024-10-16 20:23:10.633913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.827 [2024-10-16 20:23:10.633943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:55.827 [2024-10-16 20:23:10.633950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.690 ms 00:15:55.827 [2024-10-16 20:23:10.633957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.827 [2024-10-16 20:23:10.634021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.827 [2024-10-16 20:23:10.634029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:55.827 [2024-10-16 20:23:10.634035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:55.827 [2024-10-16 20:23:10.634053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.827 [2024-10-16 20:23:10.634088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.827 [2024-10-16 20:23:10.634096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:55.827 [2024-10-16 20:23:10.634104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:55.827 [2024-10-16 20:23:10.634111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.827 [2024-10-16 20:23:10.634127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:55.827 [2024-10-16 20:23:10.637055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.827 [2024-10-16 20:23:10.637080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:55.827 [2024-10-16 20:23:10.637089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:15:55.827 [2024-10-16 20:23:10.637095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.827 [2024-10-16 20:23:10.637121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.827 [2024-10-16 20:23:10.637127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:55.827 [2024-10-16 20:23:10.637135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:55.827 [2024-10-16 20:23:10.637140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.827 [2024-10-16 20:23:10.637158] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:55.827 [2024-10-16 20:23:10.637247] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:55.827 [2024-10-16 20:23:10.637259] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:55.827 [2024-10-16 20:23:10.637267] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:55.827 [2024-10-16 20:23:10.637276] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637283] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637290] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
entries: 20971520 00:15:55.827 [2024-10-16 20:23:10.637295] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:55.827 [2024-10-16 20:23:10.637305] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:55.827 [2024-10-16 20:23:10.637310] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:55.827 [2024-10-16 20:23:10.637317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.827 [2024-10-16 20:23:10.637323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:55.827 [2024-10-16 20:23:10.637330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:15:55.827 [2024-10-16 20:23:10.637335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.827 [2024-10-16 20:23:10.637383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.827 [2024-10-16 20:23:10.637388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:55.827 [2024-10-16 20:23:10.637395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:55.827 [2024-10-16 20:23:10.637401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.827 [2024-10-16 20:23:10.637456] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:55.827 [2024-10-16 20:23:10.637463] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:55.827 [2024-10-16 20:23:10.637470] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637488] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:55.827 [2024-10-16 20:23:10.637493] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:55.827 [2024-10-16 20:23:10.637510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:55.827 [2024-10-16 20:23:10.637522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:55.827 [2024-10-16 20:23:10.637527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:55.827 [2024-10-16 20:23:10.637533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:55.827 [2024-10-16 20:23:10.637539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:55.827 [2024-10-16 20:23:10.637545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:55.827 [2024-10-16 20:23:10.637549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:55.827 [2024-10-16 20:23:10.637562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:55.827 [2024-10-16 20:23:10.637569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637574] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:55.827 [2024-10-16 
20:23:10.637580] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:55.827 [2024-10-16 20:23:10.637585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637591] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:55.827 [2024-10-16 20:23:10.637596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:55.827 [2024-10-16 20:23:10.637615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:55.827 [2024-10-16 20:23:10.637631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:55.827 [2024-10-16 20:23:10.637650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:55.827 [2024-10-16 20:23:10.637666] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:55.827 [2024-10-16 20:23:10.637678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:55.827 [2024-10-16 20:23:10.637684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:55.827 [2024-10-16 20:23:10.637689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:55.827 [2024-10-16 20:23:10.637695] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:55.827 [2024-10-16 20:23:10.637701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:55.827 [2024-10-16 20:23:10.637708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.827 [2024-10-16 20:23:10.637721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:55.827 [2024-10-16 20:23:10.637726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:55.827 [2024-10-16 20:23:10.637732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:55.827 [2024-10-16 20:23:10.637738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:55.827 [2024-10-16 20:23:10.637745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:55.827 [2024-10-16 20:23:10.637750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:55.827 [2024-10-16 20:23:10.637756] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:55.827 [2024-10-16 20:23:10.637763] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:55.827 [2024-10-16 20:23:10.637772] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:55.827 [2024-10-16 20:23:10.637777] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:55.827 [2024-10-16 20:23:10.637784] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:55.827 [2024-10-16 20:23:10.637789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:55.827 [2024-10-16 20:23:10.637796] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:55.827 [2024-10-16 20:23:10.637802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:55.827 [2024-10-16 20:23:10.637818] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:55.827 [2024-10-16 20:23:10.637824] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:55.827 [2024-10-16 20:23:10.637831] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:55.827 [2024-10-16 20:23:10.637836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:55.827 [2024-10-16 20:23:10.637844] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:55.827 [2024-10-16 20:23:10.637850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:55.827 [2024-10-16 20:23:10.637859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:55.827 [2024-10-16 20:23:10.637865] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:55.827 [2024-10-16 20:23:10.637873] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:55.827 [2024-10-16 20:23:10.637879] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:55.827 [2024-10-16 20:23:10.637886] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:55.828 [2024-10-16 20:23:10.637891] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:55.828 [2024-10-16 20:23:10.637898] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:55.828 [2024-10-16 20:23:10.637903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.828 [2024-10-16 
20:23:10.637911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:55.828 [2024-10-16 20:23:10.637916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:15:55.828 [2024-10-16 20:23:10.637922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.828 [2024-10-16 20:23:10.649835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.828 [2024-10-16 20:23:10.649945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:55.828 [2024-10-16 20:23:10.649958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.888 ms 00:15:55.828 [2024-10-16 20:23:10.649965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.828 [2024-10-16 20:23:10.650031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.828 [2024-10-16 20:23:10.650068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:55.828 [2024-10-16 20:23:10.650076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:15:55.828 [2024-10-16 20:23:10.650083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.828 [2024-10-16 20:23:10.684340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.828 [2024-10-16 20:23:10.684373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:55.828 [2024-10-16 20:23:10.684383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.222 ms 00:15:55.828 [2024-10-16 20:23:10.684391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.828 [2024-10-16 20:23:10.684419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.828 [2024-10-16 20:23:10.684429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:55.828 [2024-10-16 20:23:10.684436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:55.828 [2024-10-16 20:23:10.684443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.828 [2024-10-16 20:23:10.684764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.828 [2024-10-16 20:23:10.684779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:55.828 [2024-10-16 20:23:10.684786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:15:55.828 [2024-10-16 20:23:10.684793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.828 [2024-10-16 20:23:10.684877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.828 [2024-10-16 20:23:10.684886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:55.828 [2024-10-16 20:23:10.684894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:15:55.828 [2024-10-16 20:23:10.684901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.828 [2024-10-16 20:23:10.696273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.828 [2024-10-16 20:23:10.696300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:55.828 [2024-10-16 20:23:10.696310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.361 ms 00:15:55.828 [2024-10-16 20:23:10.696317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.828 [2024-10-16 20:23:10.705362] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:55.828 [2024-10-16 20:23:10.709649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.828 [2024-10-16 20:23:10.709673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:55.828 [2024-10-16 20:23:10.709682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.273 ms 00:15:55.828 [2024-10-16 20:23:10.709689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.087 [2024-10-16 20:23:10.780190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.088 [2024-10-16 20:23:10.780225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:56.088 [2024-10-16 20:23:10.780236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.480 ms 00:15:56.088 [2024-10-16 20:23:10.780242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.088 [2024-10-16 20:23:10.780276] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:56.088 [2024-10-16 20:23:10.780284] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:00.292 [2024-10-16 20:23:14.883361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.292 [2024-10-16 20:23:14.883428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:00.292 [2024-10-16 20:23:14.883445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4103.064 ms 00:16:00.292 [2024-10-16 20:23:14.883453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.292 [2024-10-16 20:23:14.883627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:14.883636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:00.293 [2024-10-16 20:23:14.883646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:00.293 [2024-10-16 20:23:14.883653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:14.903186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:14.903447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:00.293 [2024-10-16 20:23:14.903468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.503 ms 00:16:00.293 [2024-10-16 20:23:14.903477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:14.921410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:14.921440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:00.293 [2024-10-16 20:23:14.921453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.901 ms 00:16:00.293 [2024-10-16 20:23:14.921458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:14.921709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:14.921719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:00.293 [2024-10-16 20:23:14.921727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:16:00.293 [2024-10-16 20:23:14.921732] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:14.972659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:14.972686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:00.293 [2024-10-16 20:23:14.972696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.893 ms 00:16:00.293 [2024-10-16 20:23:14.972702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:14.991391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:14.991418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:00.293 [2024-10-16 20:23:14.991428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.595 ms 00:16:00.293 [2024-10-16 20:23:14.991434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:14.992398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:14.992423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:00.293 [2024-10-16 20:23:14.992433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:16:00.293 [2024-10-16 20:23:14.992440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:15.010385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:15.010487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:00.293 [2024-10-16 20:23:15.010502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.911 ms 00:16:00.293 [2024-10-16 20:23:15.010507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:15.010527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:15.010532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:00.293 [2024-10-16 20:23:15.010542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:00.293 [2024-10-16 20:23:15.010547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:15.010606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.293 [2024-10-16 20:23:15.010612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:00.293 [2024-10-16 20:23:15.010620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:00.293 [2024-10-16 20:23:15.010625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.293 [2024-10-16 20:23:15.011295] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4383.598 ms, result 0 00:16:00.293 { 00:16:00.293 "name": "ftl0", 00:16:00.293 "uuid": "4bf29a9b-b20b-499e-becd-0c552d2da929" 00:16:00.293 } 00:16:00.293 20:23:15 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:00.293 20:23:15 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:00.293 20:23:15 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:00.554 20:23:15 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:00.554 [2024-10-16 20:23:15.299508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel created on ftl0 00:16:00.554 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:00.554 Zero copy mechanism will not be used. 00:16:00.554 Running I/O for 4 seconds... 00:16:04.760 00:16:04.760 Latency(us) 00:16:04.760 [2024-10-16T20:23:19.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.760 [2024-10-16T20:23:19.689Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:04.760 ftl0 : 4.00 1474.89 97.94 0.00 0.00 713.72 164.63 2054.30 00:16:04.760 [2024-10-16T20:23:19.689Z] =================================================================================================================== 00:16:04.760 [2024-10-16T20:23:19.689Z] Total : 1474.89 97.94 0.00 0.00 713.72 164.63 2054.30 00:16:04.760 [2024-10-16 20:23:19.305871] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:04.760 0 00:16:04.760 20:23:19 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:04.760 [2024-10-16 20:23:19.418640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:04.760 Running I/O for 4 seconds... 00:16:08.969 00:16:08.969 Latency(us) 00:16:08.969 [2024-10-16T20:23:23.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.969 [2024-10-16T20:23:23.898Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.969 ftl0 : 4.03 5521.97 21.57 0.00 0.00 23094.18 283.57 45169.43 00:16:08.969 [2024-10-16T20:23:23.898Z] =================================================================================================================== 00:16:08.969 [2024-10-16T20:23:23.898Z] Total : 5521.97 21.57 0.00 0.00 23094.18 0.00 45169.43 00:16:08.969 [2024-10-16 20:23:23.455930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:08.969 0 00:16:08.969 20:23:23 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:08.969 [2024-10-16 20:23:23.571871] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:08.969 Running I/O for 4 seconds... 
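Each bdevperf run in this test follows the same two-step pattern: the bdevperf binary was started earlier with -z so it idles until driven over RPC, and bdevperf.py then launches one timed workload per call. A minimal sketch of the verify run now in flight, using the binaries and flags shown in the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &   # target waits for RPC
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests \
        -q 128 -w verify -t 4 -o 4096   # queue depth 128, 4 KiB I/O, 4 s, with read-back verification
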
00:16:13.180 00:16:13.180 Latency(us) 00:16:13.180 [2024-10-16T20:23:28.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.180 [2024-10-16T20:23:28.109Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:13.180 Verification LBA range: start 0x0 length 0x1400000 00:16:13.180 ftl0 : 4.01 11409.42 44.57 0.00 0.00 11192.85 151.24 23391.31 00:16:13.180 [2024-10-16T20:23:28.109Z] =================================================================================================================== 00:16:13.180 [2024-10-16T20:23:28.109Z] Total : 11409.42 44.57 0.00 0.00 11192.85 0.00 23391.31 00:16:13.180 [2024-10-16 20:23:27.592963] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:13.180 0 00:16:13.180 20:23:27 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:13.180 [2024-10-16 20:23:27.779055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.180 [2024-10-16 20:23:27.779110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:13.180 [2024-10-16 20:23:27.779127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:13.180 [2024-10-16 20:23:27.779135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.180 [2024-10-16 20:23:27.779159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:13.180 [2024-10-16 20:23:27.782064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.180 [2024-10-16 20:23:27.782112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:13.180 [2024-10-16 20:23:27.782124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:16:13.180 [2024-10-16 20:23:27.782137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.180 [2024-10-16 20:23:27.785294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.180 [2024-10-16 20:23:27.785345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:13.180 [2024-10-16 20:23:27.785356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.129 ms 00:16:13.180 [2024-10-16 20:23:27.785366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.180 [2024-10-16 20:23:27.994956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.180 [2024-10-16 20:23:27.995033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:13.180 [2024-10-16 20:23:27.995072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 209.569 ms 00:16:13.180 [2024-10-16 20:23:27.995083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.180 [2024-10-16 20:23:28.001239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.180 [2024-10-16 20:23:28.001291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:13.180 [2024-10-16 20:23:28.001305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.108 ms 00:16:13.180 [2024-10-16 20:23:28.001315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.180 [2024-10-16 20:23:28.029025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.180 [2024-10-16 20:23:28.029093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
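All of the teardown steps being traced here are driven by the single bdev_ftl_delete RPC; paired with the earlier bdev_ftl_create, the whole FTL lifecycle in this job comes down to two calls (a sketch assembled from the names in the log, not a literal excerpt):

    ./scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 01d0a459-8deb-40bd-b31c-aa95a363d13e -c nvc0n1p0 --l2p_dram_limit 20
    ./scripts/rpc.py bdev_ftl_delete -b ftl0   # persists L2P, NV cache and band metadata before freeing the device
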
00:16:13.180 [2024-10-16 20:23:28.029107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.643 ms 00:16:13.180 [2024-10-16 20:23:28.029122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.180 [2024-10-16 20:23:28.047512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.180 [2024-10-16 20:23:28.047739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:13.180 [2024-10-16 20:23:28.047765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.338 ms 00:16:13.180 [2024-10-16 20:23:28.047776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.180 [2024-10-16 20:23:28.047938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.180 [2024-10-16 20:23:28.047954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:13.180 [2024-10-16 20:23:28.047963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:16:13.180 [2024-10-16 20:23:28.047973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.180 [2024-10-16 20:23:28.075334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.180 [2024-10-16 20:23:28.075392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:13.181 [2024-10-16 20:23:28.075404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.344 ms 00:16:13.181 [2024-10-16 20:23:28.075414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.181 [2024-10-16 20:23:28.101803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.181 [2024-10-16 20:23:28.101859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:13.181 [2024-10-16 20:23:28.101872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.338 ms 00:16:13.181 [2024-10-16 20:23:28.101884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.442 [2024-10-16 20:23:28.127626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.442 [2024-10-16 20:23:28.127686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:13.442 [2024-10-16 20:23:28.127699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.691 ms 00:16:13.442 [2024-10-16 20:23:28.127709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.442 [2024-10-16 20:23:28.153722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.442 [2024-10-16 20:23:28.153939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:13.442 [2024-10-16 20:23:28.153962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.907 ms 00:16:13.442 [2024-10-16 20:23:28.153972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.442 [2024-10-16 20:23:28.154123] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:13.442 [2024-10-16 20:23:28.154166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154196] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:13.442 [2024-10-16 20:23:28.154468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 
20:23:28.154504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:16:13.443 [2024-10-16 20:23:28.154732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.154991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:13.443 [2024-10-16 20:23:28.155170] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:13.443 [2024-10-16 20:23:28.155179] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4bf29a9b-b20b-499e-becd-0c552d2da929 00:16:13.443 [2024-10-16 20:23:28.155192] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:13.443 
[2024-10-16 20:23:28.155199] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:13.443 [2024-10-16 20:23:28.155209] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:13.443 [2024-10-16 20:23:28.155217] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:13.443 [2024-10-16 20:23:28.155227] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:13.443 [2024-10-16 20:23:28.155237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:13.443 [2024-10-16 20:23:28.155247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:13.443 [2024-10-16 20:23:28.155253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:13.443 [2024-10-16 20:23:28.155261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:13.443 [2024-10-16 20:23:28.155269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.443 [2024-10-16 20:23:28.155279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:13.443 [2024-10-16 20:23:28.155288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:16:13.443 [2024-10-16 20:23:28.155297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.443 [2024-10-16 20:23:28.169510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.443 [2024-10-16 20:23:28.169694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:13.443 [2024-10-16 20:23:28.170144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.154 ms 00:16:13.443 [2024-10-16 20:23:28.170217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.443 [2024-10-16 20:23:28.170513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.443 [2024-10-16 20:23:28.170657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:13.443 [2024-10-16 20:23:28.170711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:16:13.443 [2024-10-16 20:23:28.170740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.443 [2024-10-16 20:23:28.212931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.443 [2024-10-16 20:23:28.213149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:13.443 [2024-10-16 20:23:28.213238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.443 [2024-10-16 20:23:28.213265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.213358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.213382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:13.444 [2024-10-16 20:23:28.213403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.444 [2024-10-16 20:23:28.213425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.213528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.213660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:13.444 [2024-10-16 20:23:28.213686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.444 [2024-10-16 20:23:28.213713] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.213746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.213785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:13.444 [2024-10-16 20:23:28.213805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.444 [2024-10-16 20:23:28.213827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.294825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.295079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:13.444 [2024-10-16 20:23:28.295203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.444 [2024-10-16 20:23:28.295236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.327167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.327360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:13.444 [2024-10-16 20:23:28.327423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.444 [2024-10-16 20:23:28.327449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.327537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.327564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:13.444 [2024-10-16 20:23:28.327586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.444 [2024-10-16 20:23:28.327611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.327669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.327763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:13.444 [2024-10-16 20:23:28.327790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.444 [2024-10-16 20:23:28.327812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.327945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.327974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:13.444 [2024-10-16 20:23:28.327994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.444 [2024-10-16 20:23:28.328016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.328226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.328268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:13.444 [2024-10-16 20:23:28.328293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.444 [2024-10-16 20:23:28.328315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.444 [2024-10-16 20:23:28.328428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.444 [2024-10-16 20:23:28.328459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:13.444 [2024-10-16 20:23:28.328480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms
00:16:13.444 [2024-10-16 20:23:28.328505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:13.444 [2024-10-16 20:23:28.328680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:13.444 [2024-10-16 20:23:28.328710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:13.444 [2024-10-16 20:23:28.328732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:13.444 [2024-10-16 20:23:28.328753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:13.444 [2024-10-16 20:23:28.328918] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.825 ms, result 0
00:16:13.444 true
00:16:13.444 20:23:28 -- ftl/bdevperf.sh@37 -- # killprocess 71587
00:16:13.444 20:23:28 -- common/autotest_common.sh@926 -- # '[' -z 71587 ']'
00:16:13.444 20:23:28 -- common/autotest_common.sh@930 -- # kill -0 71587
00:16:13.444 20:23:28 -- common/autotest_common.sh@931 -- # uname
00:16:13.444 20:23:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:16:13.444 20:23:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71587
00:16:13.704 killing process with pid 71587
Received shutdown signal, test time was about 4.000000 seconds
00:16:13.704
00:16:13.704 Latency(us)
00:16:13.704 [2024-10-16T20:23:28.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:13.704 [2024-10-16T20:23:28.633Z] ===================================================================================================================
00:16:13.704 [2024-10-16T20:23:28.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:13.704 20:23:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:16:13.704 20:23:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:16:13.704 20:23:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71587'
00:16:13.704 20:23:28 -- common/autotest_common.sh@945 -- # kill 71587
00:16:13.704 20:23:28 -- common/autotest_common.sh@950 -- # wait 71587
00:16:16.252 20:23:31 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:16:16.252 20:23:31 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:16:16.252 20:23:31 -- common/autotest_common.sh@718 -- # xtrace_disable
00:16:16.252 20:23:31 -- common/autotest_common.sh@10 -- # set +x
00:16:16.513 Remove shared memory files
00:16:16.513 20:23:31 -- ftl/bdevperf.sh@41 -- # remove_shm
00:16:16.513 20:23:31 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:16:16.513 20:23:31 -- ftl/common.sh@205 -- # rm -f rm -f
00:16:16.513 20:23:31 -- ftl/common.sh@206 -- # rm -f rm -f
00:16:16.513 20:23:31 -- ftl/common.sh@207 -- # rm -f rm -f
00:16:16.513 20:23:31 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:16:16.513 20:23:31 -- ftl/common.sh@209 -- # rm -f rm -f
00:16:16.513 ************************************
00:16:16.513 END TEST ftl_bdevperf
00:16:16.513 ************************************
00:16:16.513
00:16:16.513 real 0m24.350s
00:16:16.513 user 0m26.733s
00:16:16.513 sys 0m0.948s
00:16:16.513 20:23:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:16.513 20:23:31 -- common/autotest_common.sh@10 -- # set +x
00:16:16.513 20:23:31 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
00:16:16.513 20:23:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
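[Editor's note] The ftl_bdevperf teardown traced above (autotest_common.sh@926-950) is SPDK's common killprocess helper at work: verify the pid, resolve the process name, signal, then reap. A minimal sketch of that pattern in bash, following the traced steps; this is an illustration, not the actual test/common/autotest_common.sh source:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                   # no pid supplied
        kill -0 "$pid" 2> /dev/null || return 0     # process already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            : # when wrapped in sudo, the real helper resolves the child instead (omitted here)
        fi
        echo "killing process with pid $pid"
        kill "$pid"      # default SIGTERM
        wait "$pid"      # reap the process and propagate its exit status
    }

That SIGTERM is what makes bdevperf print the "Received shutdown signal" line and the zeroed latency table, and run the 'FTL shutdown' management sequence (including the per-band dump) recorded above.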
00:16:16.513 20:23:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:16:16.513 20:23:31 -- common/autotest_common.sh@10 -- # set +x
00:16:16.513 ************************************
00:16:16.513 START TEST ftl_trim
00:16:16.513 ************************************
00:16:16.513 20:23:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
00:16:16.513 * Looking for test storage...
00:16:16.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:16:16.513 20:23:31 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:16:16.513 20:23:31 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
00:16:16.513 20:23:31 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:16:16.513 20:23:31 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:16:16.513 20:23:31 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:16:16.513 20:23:31 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:16:16.513 20:23:31 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:16.513 20:23:31 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:16:16.513 20:23:31 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:16:16.513 20:23:31 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:16.513 20:23:31 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:16.513 20:23:31 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:16:16.513 20:23:31 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:16:16.513 20:23:31 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:16.513 20:23:31 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:16.513 20:23:31 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:16:16.513 20:23:31 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:16:16.513 20:23:31 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:16.513 20:23:31 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:16.513 20:23:31 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:16:16.513 20:23:31 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:16:16.513 20:23:31 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:16.513 20:23:31 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:16.513 20:23:31 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:16.513 20:23:31 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:16.513 20:23:31 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:16:16.513 20:23:31 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:16:16.513 20:23:31 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:16.513 20:23:31 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:16.513 20:23:31 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:16.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
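[Editor's note] The "Waiting for process..." message above is the harness polling the freshly launched spdk_tgt until its RPC socket answers. A rough sketch of that launch-and-wait step, assuming the variables exported above and that an RPC such as rpc_get_methods succeeds once the target is up; illustrative, not the real waitforlisten implementation:

    "$spdk_tgt_bin" -m 0x7 &   # trim.sh starts the target on cores 0-2, as the trace below shows
    svcpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    while ! "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$svcpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done

Once the loop exits, every later "$rpc_py ..." call in the trace (bdev_nvme_attach_controller, bdev_lvol_create_lvstore, bdev_ftl_create, and so on) talks to this pid over the same socket.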
00:16:16.513 20:23:31 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:16:16.514 20:23:31 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:16:16.514 20:23:31 -- ftl/trim.sh@25 -- # timeout=240 00:16:16.514 20:23:31 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:16.514 20:23:31 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:16.514 20:23:31 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:16.514 20:23:31 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:16.514 20:23:31 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:16.514 20:23:31 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:16.514 20:23:31 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:16.514 20:23:31 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:16.514 20:23:31 -- ftl/trim.sh@40 -- # svcpid=71966 00:16:16.514 20:23:31 -- ftl/trim.sh@41 -- # waitforlisten 71966 00:16:16.514 20:23:31 -- common/autotest_common.sh@819 -- # '[' -z 71966 ']' 00:16:16.514 20:23:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.514 20:23:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.514 20:23:31 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:16.514 20:23:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.514 20:23:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.514 20:23:31 -- common/autotest_common.sh@10 -- # set +x 00:16:16.775 [2024-10-16 20:23:31.474004] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:16.775 [2024-10-16 20:23:31.474166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71966 ] 00:16:16.775 [2024-10-16 20:23:31.627800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.037 [2024-10-16 20:23:31.857317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.037 [2024-10-16 20:23:31.858104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.037 [2024-10-16 20:23:31.858302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.037 [2024-10-16 20:23:31.858415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.472 20:23:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.472 20:23:33 -- common/autotest_common.sh@852 -- # return 0 00:16:18.472 20:23:33 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:18.472 20:23:33 -- ftl/common.sh@54 -- # local name=nvme0 00:16:18.472 20:23:33 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:18.472 20:23:33 -- ftl/common.sh@56 -- # local size=103424 00:16:18.472 20:23:33 -- ftl/common.sh@59 -- # local base_bdev 00:16:18.472 20:23:33 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:18.472 20:23:33 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:18.472 20:23:33 -- ftl/common.sh@62 -- # local base_size 00:16:18.472 20:23:33 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:18.472 20:23:33 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:16:18.472 20:23:33 -- 
common/autotest_common.sh@1358 -- # local bdev_info 00:16:18.472 20:23:33 -- common/autotest_common.sh@1359 -- # local bs 00:16:18.472 20:23:33 -- common/autotest_common.sh@1360 -- # local nb 00:16:18.472 20:23:33 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:18.733 20:23:33 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:18.733 { 00:16:18.733 "name": "nvme0n1", 00:16:18.733 "aliases": [ 00:16:18.733 "b7008476-999e-4055-9e04-7e08697e9832" 00:16:18.733 ], 00:16:18.733 "product_name": "NVMe disk", 00:16:18.733 "block_size": 4096, 00:16:18.734 "num_blocks": 1310720, 00:16:18.734 "uuid": "b7008476-999e-4055-9e04-7e08697e9832", 00:16:18.734 "assigned_rate_limits": { 00:16:18.734 "rw_ios_per_sec": 0, 00:16:18.734 "rw_mbytes_per_sec": 0, 00:16:18.734 "r_mbytes_per_sec": 0, 00:16:18.734 "w_mbytes_per_sec": 0 00:16:18.734 }, 00:16:18.734 "claimed": true, 00:16:18.734 "claim_type": "read_many_write_one", 00:16:18.734 "zoned": false, 00:16:18.734 "supported_io_types": { 00:16:18.734 "read": true, 00:16:18.734 "write": true, 00:16:18.734 "unmap": true, 00:16:18.734 "write_zeroes": true, 00:16:18.734 "flush": true, 00:16:18.734 "reset": true, 00:16:18.734 "compare": true, 00:16:18.734 "compare_and_write": false, 00:16:18.734 "abort": true, 00:16:18.734 "nvme_admin": true, 00:16:18.734 "nvme_io": true 00:16:18.734 }, 00:16:18.734 "driver_specific": { 00:16:18.734 "nvme": [ 00:16:18.734 { 00:16:18.734 "pci_address": "0000:00:07.0", 00:16:18.734 "trid": { 00:16:18.734 "trtype": "PCIe", 00:16:18.734 "traddr": "0000:00:07.0" 00:16:18.734 }, 00:16:18.734 "ctrlr_data": { 00:16:18.734 "cntlid": 0, 00:16:18.734 "vendor_id": "0x1b36", 00:16:18.734 "model_number": "QEMU NVMe Ctrl", 00:16:18.734 "serial_number": "12341", 00:16:18.734 "firmware_revision": "8.0.0", 00:16:18.734 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:18.734 "oacs": { 00:16:18.734 "security": 0, 00:16:18.734 "format": 1, 00:16:18.734 "firmware": 0, 00:16:18.734 "ns_manage": 1 00:16:18.734 }, 00:16:18.734 "multi_ctrlr": false, 00:16:18.734 "ana_reporting": false 00:16:18.734 }, 00:16:18.734 "vs": { 00:16:18.734 "nvme_version": "1.4" 00:16:18.734 }, 00:16:18.734 "ns_data": { 00:16:18.734 "id": 1, 00:16:18.734 "can_share": false 00:16:18.734 } 00:16:18.734 } 00:16:18.734 ], 00:16:18.734 "mp_policy": "active_passive" 00:16:18.734 } 00:16:18.734 } 00:16:18.734 ]' 00:16:18.734 20:23:33 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:18.734 20:23:33 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:18.734 20:23:33 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:18.734 20:23:33 -- common/autotest_common.sh@1363 -- # nb=1310720 00:16:18.734 20:23:33 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:16:18.734 20:23:33 -- common/autotest_common.sh@1367 -- # echo 5120 00:16:18.734 20:23:33 -- ftl/common.sh@63 -- # base_size=5120 00:16:18.734 20:23:33 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:18.734 20:23:33 -- ftl/common.sh@67 -- # clear_lvols 00:16:18.734 20:23:33 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:18.734 20:23:33 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:18.996 20:23:33 -- ftl/common.sh@28 -- # stores=c476d9c3-3fce-4994-801a-26086272b8e0 00:16:18.996 20:23:33 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:18.996 20:23:33 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
c476d9c3-3fce-4994-801a-26086272b8e0 00:16:19.257 20:23:33 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:19.257 20:23:34 -- ftl/common.sh@68 -- # lvs=10814edb-d737-483b-84ad-c8bbb50843e3 00:16:19.257 20:23:34 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 10814edb-d737-483b-84ad-c8bbb50843e3 00:16:19.519 20:23:34 -- ftl/trim.sh@43 -- # split_bdev=b86ae86f-1cb4-4342-a345-6c298048b801 00:16:19.519 20:23:34 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 b86ae86f-1cb4-4342-a345-6c298048b801 00:16:19.519 20:23:34 -- ftl/common.sh@35 -- # local name=nvc0 00:16:19.519 20:23:34 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:19.519 20:23:34 -- ftl/common.sh@37 -- # local base_bdev=b86ae86f-1cb4-4342-a345-6c298048b801 00:16:19.519 20:23:34 -- ftl/common.sh@38 -- # local cache_size= 00:16:19.519 20:23:34 -- ftl/common.sh@41 -- # get_bdev_size b86ae86f-1cb4-4342-a345-6c298048b801 00:16:19.519 20:23:34 -- common/autotest_common.sh@1357 -- # local bdev_name=b86ae86f-1cb4-4342-a345-6c298048b801 00:16:19.519 20:23:34 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:19.519 20:23:34 -- common/autotest_common.sh@1359 -- # local bs 00:16:19.519 20:23:34 -- common/autotest_common.sh@1360 -- # local nb 00:16:19.519 20:23:34 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b86ae86f-1cb4-4342-a345-6c298048b801 00:16:19.780 20:23:34 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:19.780 { 00:16:19.780 "name": "b86ae86f-1cb4-4342-a345-6c298048b801", 00:16:19.780 "aliases": [ 00:16:19.780 "lvs/nvme0n1p0" 00:16:19.780 ], 00:16:19.780 "product_name": "Logical Volume", 00:16:19.780 "block_size": 4096, 00:16:19.780 "num_blocks": 26476544, 00:16:19.780 "uuid": "b86ae86f-1cb4-4342-a345-6c298048b801", 00:16:19.780 "assigned_rate_limits": { 00:16:19.780 "rw_ios_per_sec": 0, 00:16:19.780 "rw_mbytes_per_sec": 0, 00:16:19.780 "r_mbytes_per_sec": 0, 00:16:19.780 "w_mbytes_per_sec": 0 00:16:19.780 }, 00:16:19.780 "claimed": false, 00:16:19.780 "zoned": false, 00:16:19.780 "supported_io_types": { 00:16:19.780 "read": true, 00:16:19.780 "write": true, 00:16:19.780 "unmap": true, 00:16:19.780 "write_zeroes": true, 00:16:19.780 "flush": false, 00:16:19.780 "reset": true, 00:16:19.780 "compare": false, 00:16:19.780 "compare_and_write": false, 00:16:19.780 "abort": false, 00:16:19.780 "nvme_admin": false, 00:16:19.780 "nvme_io": false 00:16:19.780 }, 00:16:19.780 "driver_specific": { 00:16:19.780 "lvol": { 00:16:19.780 "lvol_store_uuid": "10814edb-d737-483b-84ad-c8bbb50843e3", 00:16:19.780 "base_bdev": "nvme0n1", 00:16:19.780 "thin_provision": true, 00:16:19.780 "snapshot": false, 00:16:19.780 "clone": false, 00:16:19.780 "esnap_clone": false 00:16:19.780 } 00:16:19.780 } 00:16:19.780 } 00:16:19.780 ]' 00:16:19.780 20:23:34 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:19.780 20:23:34 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:19.780 20:23:34 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:19.781 20:23:34 -- common/autotest_common.sh@1363 -- # nb=26476544 00:16:19.781 20:23:34 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:16:19.781 20:23:34 -- common/autotest_common.sh@1367 -- # echo 103424 00:16:19.781 20:23:34 -- ftl/common.sh@41 -- # local base_size=5171 00:16:19.781 20:23:34 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:19.781 20:23:34 -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:20.040 20:23:34 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:20.040 20:23:34 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:20.040 20:23:34 -- ftl/common.sh@48 -- # get_bdev_size b86ae86f-1cb4-4342-a345-6c298048b801 00:16:20.040 20:23:34 -- common/autotest_common.sh@1357 -- # local bdev_name=b86ae86f-1cb4-4342-a345-6c298048b801 00:16:20.040 20:23:34 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:20.040 20:23:34 -- common/autotest_common.sh@1359 -- # local bs 00:16:20.040 20:23:34 -- common/autotest_common.sh@1360 -- # local nb 00:16:20.040 20:23:34 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b86ae86f-1cb4-4342-a345-6c298048b801 00:16:20.298 20:23:35 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:20.298 { 00:16:20.298 "name": "b86ae86f-1cb4-4342-a345-6c298048b801", 00:16:20.298 "aliases": [ 00:16:20.298 "lvs/nvme0n1p0" 00:16:20.298 ], 00:16:20.298 "product_name": "Logical Volume", 00:16:20.298 "block_size": 4096, 00:16:20.298 "num_blocks": 26476544, 00:16:20.298 "uuid": "b86ae86f-1cb4-4342-a345-6c298048b801", 00:16:20.298 "assigned_rate_limits": { 00:16:20.298 "rw_ios_per_sec": 0, 00:16:20.298 "rw_mbytes_per_sec": 0, 00:16:20.298 "r_mbytes_per_sec": 0, 00:16:20.298 "w_mbytes_per_sec": 0 00:16:20.298 }, 00:16:20.298 "claimed": false, 00:16:20.298 "zoned": false, 00:16:20.298 "supported_io_types": { 00:16:20.298 "read": true, 00:16:20.298 "write": true, 00:16:20.298 "unmap": true, 00:16:20.298 "write_zeroes": true, 00:16:20.298 "flush": false, 00:16:20.298 "reset": true, 00:16:20.298 "compare": false, 00:16:20.298 "compare_and_write": false, 00:16:20.298 "abort": false, 00:16:20.298 "nvme_admin": false, 00:16:20.298 "nvme_io": false 00:16:20.298 }, 00:16:20.298 "driver_specific": { 00:16:20.298 "lvol": { 00:16:20.298 "lvol_store_uuid": "10814edb-d737-483b-84ad-c8bbb50843e3", 00:16:20.298 "base_bdev": "nvme0n1", 00:16:20.298 "thin_provision": true, 00:16:20.298 "snapshot": false, 00:16:20.298 "clone": false, 00:16:20.298 "esnap_clone": false 00:16:20.298 } 00:16:20.298 } 00:16:20.298 } 00:16:20.298 ]' 00:16:20.298 20:23:35 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:20.298 20:23:35 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:20.298 20:23:35 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:20.298 20:23:35 -- common/autotest_common.sh@1363 -- # nb=26476544 00:16:20.298 20:23:35 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:16:20.298 20:23:35 -- common/autotest_common.sh@1367 -- # echo 103424 00:16:20.298 20:23:35 -- ftl/common.sh@48 -- # cache_size=5171 00:16:20.298 20:23:35 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:20.556 20:23:35 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:20.556 20:23:35 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:20.556 20:23:35 -- ftl/trim.sh@47 -- # get_bdev_size b86ae86f-1cb4-4342-a345-6c298048b801 00:16:20.556 20:23:35 -- common/autotest_common.sh@1357 -- # local bdev_name=b86ae86f-1cb4-4342-a345-6c298048b801 00:16:20.556 20:23:35 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:20.556 20:23:35 -- common/autotest_common.sh@1359 -- # local bs 00:16:20.556 20:23:35 -- common/autotest_common.sh@1360 -- # local nb 00:16:20.557 20:23:35 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_get_bdevs -b b86ae86f-1cb4-4342-a345-6c298048b801 00:16:20.557 20:23:35 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:20.557 { 00:16:20.557 "name": "b86ae86f-1cb4-4342-a345-6c298048b801", 00:16:20.557 "aliases": [ 00:16:20.557 "lvs/nvme0n1p0" 00:16:20.557 ], 00:16:20.557 "product_name": "Logical Volume", 00:16:20.557 "block_size": 4096, 00:16:20.557 "num_blocks": 26476544, 00:16:20.557 "uuid": "b86ae86f-1cb4-4342-a345-6c298048b801", 00:16:20.557 "assigned_rate_limits": { 00:16:20.557 "rw_ios_per_sec": 0, 00:16:20.557 "rw_mbytes_per_sec": 0, 00:16:20.557 "r_mbytes_per_sec": 0, 00:16:20.557 "w_mbytes_per_sec": 0 00:16:20.557 }, 00:16:20.557 "claimed": false, 00:16:20.557 "zoned": false, 00:16:20.557 "supported_io_types": { 00:16:20.557 "read": true, 00:16:20.557 "write": true, 00:16:20.557 "unmap": true, 00:16:20.557 "write_zeroes": true, 00:16:20.557 "flush": false, 00:16:20.557 "reset": true, 00:16:20.557 "compare": false, 00:16:20.557 "compare_and_write": false, 00:16:20.557 "abort": false, 00:16:20.557 "nvme_admin": false, 00:16:20.557 "nvme_io": false 00:16:20.557 }, 00:16:20.557 "driver_specific": { 00:16:20.557 "lvol": { 00:16:20.557 "lvol_store_uuid": "10814edb-d737-483b-84ad-c8bbb50843e3", 00:16:20.557 "base_bdev": "nvme0n1", 00:16:20.557 "thin_provision": true, 00:16:20.557 "snapshot": false, 00:16:20.557 "clone": false, 00:16:20.557 "esnap_clone": false 00:16:20.557 } 00:16:20.557 } 00:16:20.557 } 00:16:20.557 ]' 00:16:20.557 20:23:35 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:20.816 20:23:35 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:20.816 20:23:35 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:20.816 20:23:35 -- common/autotest_common.sh@1363 -- # nb=26476544 00:16:20.816 20:23:35 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:16:20.816 20:23:35 -- common/autotest_common.sh@1367 -- # echo 103424 00:16:20.816 20:23:35 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:20.816 20:23:35 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b86ae86f-1cb4-4342-a345-6c298048b801 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:20.816 [2024-10-16 20:23:35.699931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.699974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:20.816 [2024-10-16 20:23:35.699987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:20.816 [2024-10-16 20:23:35.699994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.702282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.702305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:20.816 [2024-10-16 20:23:35.702314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:16:20.816 [2024-10-16 20:23:35.702320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.702404] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:20.816 [2024-10-16 20:23:35.702961] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:20.816 [2024-10-16 20:23:35.702977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.702983] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:20.816 [2024-10-16 20:23:35.702992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:16:20.816 [2024-10-16 20:23:35.702997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.703186] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 71d3b1be-a432-41c7-ac06-b432fb201faa 00:16:20.816 [2024-10-16 20:23:35.704209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.704234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:20.816 [2024-10-16 20:23:35.704242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:20.816 [2024-10-16 20:23:35.704249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.709549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.709667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:20.816 [2024-10-16 20:23:35.709679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.228 ms 00:16:20.816 [2024-10-16 20:23:35.709686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.709803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.709813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:20.816 [2024-10-16 20:23:35.709820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:20.816 [2024-10-16 20:23:35.709829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.709861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.709869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:20.816 [2024-10-16 20:23:35.709875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:20.816 [2024-10-16 20:23:35.709881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.709910] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:20.816 [2024-10-16 20:23:35.713060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.713084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:20.816 [2024-10-16 20:23:35.713094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.152 ms 00:16:20.816 [2024-10-16 20:23:35.713100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.713160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.713167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:20.816 [2024-10-16 20:23:35.713174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:20.816 [2024-10-16 20:23:35.713180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.713217] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:20.816 [2024-10-16 20:23:35.713303] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:20.816 [2024-10-16 20:23:35.713315] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:20.816 [2024-10-16 20:23:35.713323] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:20.816 [2024-10-16 20:23:35.713332] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:20.816 [2024-10-16 20:23:35.713339] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:20.816 [2024-10-16 20:23:35.713348] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:20.816 [2024-10-16 20:23:35.713353] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:20.816 [2024-10-16 20:23:35.713361] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:20.816 [2024-10-16 20:23:35.713367] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:20.816 [2024-10-16 20:23:35.713374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.713379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:20.816 [2024-10-16 20:23:35.713386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:16:20.816 [2024-10-16 20:23:35.713392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.713449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.816 [2024-10-16 20:23:35.713455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:20.816 [2024-10-16 20:23:35.713463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:20.816 [2024-10-16 20:23:35.713468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.816 [2024-10-16 20:23:35.713562] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:20.816 [2024-10-16 20:23:35.713569] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:20.816 [2024-10-16 20:23:35.713576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.816 [2024-10-16 20:23:35.713582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.816 [2024-10-16 20:23:35.713589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:20.816 [2024-10-16 20:23:35.713594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:20.816 [2024-10-16 20:23:35.713600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:20.816 [2024-10-16 20:23:35.713605] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:20.817 [2024-10-16 20:23:35.713612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:20.817 [2024-10-16 20:23:35.713617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.817 [2024-10-16 20:23:35.713623] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:20.817 [2024-10-16 20:23:35.713629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:20.817 [2024-10-16 20:23:35.713635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.817 [2024-10-16 
20:23:35.713640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:20.817 [2024-10-16 20:23:35.713647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:20.817 [2024-10-16 20:23:35.713652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.817 [2024-10-16 20:23:35.713661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:20.817 [2024-10-16 20:23:35.713666] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:20.817 [2024-10-16 20:23:35.713672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.817 [2024-10-16 20:23:35.713677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:20.817 [2024-10-16 20:23:35.713684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:20.817 [2024-10-16 20:23:35.713689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:20.817 [2024-10-16 20:23:35.713695] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:20.817 [2024-10-16 20:23:35.713700] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:20.817 [2024-10-16 20:23:35.713706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:20.817 [2024-10-16 20:23:35.713710] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:20.817 [2024-10-16 20:23:35.713717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:20.817 [2024-10-16 20:23:35.713722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:20.817 [2024-10-16 20:23:35.713728] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:20.817 [2024-10-16 20:23:35.713740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:20.817 [2024-10-16 20:23:35.713747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:20.817 [2024-10-16 20:23:35.713752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:20.817 [2024-10-16 20:23:35.713760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:20.817 [2024-10-16 20:23:35.713764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:20.817 [2024-10-16 20:23:35.713770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:20.817 [2024-10-16 20:23:35.713775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:20.817 [2024-10-16 20:23:35.713782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.817 [2024-10-16 20:23:35.713787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:20.817 [2024-10-16 20:23:35.713793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:20.817 [2024-10-16 20:23:35.713798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.817 [2024-10-16 20:23:35.713804] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:20.817 [2024-10-16 20:23:35.713810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:20.817 [2024-10-16 20:23:35.713817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.817 [2024-10-16 20:23:35.713822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.817 [2024-10-16 20:23:35.713830] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] 
Region vmap 00:16:20.817 [2024-10-16 20:23:35.713836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:20.817 [2024-10-16 20:23:35.713842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:20.817 [2024-10-16 20:23:35.713848] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:20.817 [2024-10-16 20:23:35.713856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:20.817 [2024-10-16 20:23:35.713861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:20.817 [2024-10-16 20:23:35.713870] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:20.817 [2024-10-16 20:23:35.713877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.817 [2024-10-16 20:23:35.713885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:20.817 [2024-10-16 20:23:35.713890] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:20.817 [2024-10-16 20:23:35.713897] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:20.817 [2024-10-16 20:23:35.713902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:20.817 [2024-10-16 20:23:35.713909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:20.817 [2024-10-16 20:23:35.713914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:20.817 [2024-10-16 20:23:35.713921] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:20.817 [2024-10-16 20:23:35.713926] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:20.817 [2024-10-16 20:23:35.713932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:20.817 [2024-10-16 20:23:35.713938] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:20.817 [2024-10-16 20:23:35.713944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:20.817 [2024-10-16 20:23:35.713950] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:20.817 [2024-10-16 20:23:35.713959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:20.817 [2024-10-16 20:23:35.713964] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:20.817 [2024-10-16 20:23:35.713972] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.817 [2024-10-16 20:23:35.713978] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:20.817 [2024-10-16 20:23:35.713984] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:20.817 [2024-10-16 20:23:35.713990] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:20.817 [2024-10-16 20:23:35.713996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:20.817 [2024-10-16 20:23:35.714002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.817 [2024-10-16 20:23:35.714009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:20.817 [2024-10-16 20:23:35.714014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:16:20.817 [2024-10-16 20:23:35.714022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.817 [2024-10-16 20:23:35.726575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.817 [2024-10-16 20:23:35.726610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:20.817 [2024-10-16 20:23:35.726618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.445 ms 00:16:20.817 [2024-10-16 20:23:35.726625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.817 [2024-10-16 20:23:35.726727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.817 [2024-10-16 20:23:35.726737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:20.817 [2024-10-16 20:23:35.726745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:20.817 [2024-10-16 20:23:35.726752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.075 [2024-10-16 20:23:35.752574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.076 [2024-10-16 20:23:35.752697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:21.076 [2024-10-16 20:23:35.752710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.789 ms 00:16:21.076 [2024-10-16 20:23:35.752717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.076 [2024-10-16 20:23:35.752767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.076 [2024-10-16 20:23:35.752775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:21.076 [2024-10-16 20:23:35.752782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:21.076 [2024-10-16 20:23:35.752792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.076 [2024-10-16 20:23:35.753133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.076 [2024-10-16 20:23:35.753148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:21.076 [2024-10-16 20:23:35.753154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:16:21.076 [2024-10-16 20:23:35.753161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.076 [2024-10-16 20:23:35.753255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.076 [2024-10-16 20:23:35.753264] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:21.076 [2024-10-16 20:23:35.753271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:21.076 [2024-10-16 20:23:35.753278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.076 [2024-10-16 20:23:35.778141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.076 [2024-10-16 20:23:35.778336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:21.076 [2024-10-16 20:23:35.778365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.830 ms 00:16:21.076 [2024-10-16 20:23:35.778380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.076 [2024-10-16 20:23:35.789129] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:21.076 [2024-10-16 20:23:35.802201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.076 [2024-10-16 20:23:35.802230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:21.076 [2024-10-16 20:23:35.802241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.661 ms 00:16:21.076 [2024-10-16 20:23:35.802247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.076 [2024-10-16 20:23:35.874064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.076 [2024-10-16 20:23:35.874094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:21.076 [2024-10-16 20:23:35.874105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.750 ms 00:16:21.076 [2024-10-16 20:23:35.874112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.076 [2024-10-16 20:23:35.874182] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
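[Editor's note] The notice above marks the slowest step of a first-time FTL startup: the 4GiB NV-cache data region is wiped before use. The matching trace records just below put the scrub at 2618.479 ms, roughly 1.5 GiB/s of sequential writes to the cache device; a quick check of that arithmetic:

    awk 'BEGIN { printf "scrub throughput: %.2f GiB/s\n", 4 / (2618.479 / 1000) }'   # prints 1.53

Every other step in the 'FTL startup' sequence reports durations in the tens of milliseconds, so on a first boot the scrub dominates the total startup time reported at the end of the trace.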
00:16:21.076 [2024-10-16 20:23:35.874192] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:23.604 [2024-10-16 20:23:38.492674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.604 [2024-10-16 20:23:38.492727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:23.604 [2024-10-16 20:23:38.492744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2618.479 ms 00:16:23.604 [2024-10-16 20:23:38.492752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.604 [2024-10-16 20:23:38.492974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.604 [2024-10-16 20:23:38.492988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:23.604 [2024-10-16 20:23:38.492999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:23.604 [2024-10-16 20:23:38.493007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.604 [2024-10-16 20:23:38.517106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.604 [2024-10-16 20:23:38.517139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:23.604 [2024-10-16 20:23:38.517152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.028 ms 00:16:23.604 [2024-10-16 20:23:38.517160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.862 [2024-10-16 20:23:38.540425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.862 [2024-10-16 20:23:38.540455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:23.862 [2024-10-16 20:23:38.540470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.196 ms 00:16:23.862 [2024-10-16 20:23:38.540477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.862 [2024-10-16 20:23:38.540806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.862 [2024-10-16 20:23:38.540816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:23.862 [2024-10-16 20:23:38.540826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:16:23.862 [2024-10-16 20:23:38.540835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.862 [2024-10-16 20:23:38.605648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.862 [2024-10-16 20:23:38.605679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:23.862 [2024-10-16 20:23:38.605692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.771 ms 00:16:23.862 [2024-10-16 20:23:38.605700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.862 [2024-10-16 20:23:38.630152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.862 [2024-10-16 20:23:38.630183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:23.862 [2024-10-16 20:23:38.630195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.378 ms 00:16:23.862 [2024-10-16 20:23:38.630203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.862 [2024-10-16 20:23:38.634534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.862 [2024-10-16 20:23:38.634567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:16:23.862 [2024-10-16 20:23:38.634580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.263 ms 00:16:23.862 [2024-10-16 20:23:38.634588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.862 [2024-10-16 20:23:38.657414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.862 [2024-10-16 20:23:38.657444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:23.862 [2024-10-16 20:23:38.657456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.758 ms 00:16:23.862 [2024-10-16 20:23:38.657464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.862 [2024-10-16 20:23:38.657525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.862 [2024-10-16 20:23:38.657535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:23.862 [2024-10-16 20:23:38.657545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:23.862 [2024-10-16 20:23:38.657552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.862 [2024-10-16 20:23:38.657640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.862 [2024-10-16 20:23:38.657662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:23.862 [2024-10-16 20:23:38.657672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:23.862 [2024-10-16 20:23:38.657679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.862 [2024-10-16 20:23:38.658472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:23.862 [2024-10-16 20:23:38.661521] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2958.267 ms, result 0 00:16:23.862 [2024-10-16 20:23:38.662361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:23.862 { 00:16:23.862 "name": "ftl0", 00:16:23.862 "uuid": "71d3b1be-a432-41c7-ac06-b432fb201faa" 00:16:23.862 } 00:16:23.862 20:23:38 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:23.862 20:23:38 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:16:23.862 20:23:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:23.862 20:23:38 -- common/autotest_common.sh@889 -- # local i 00:16:23.862 20:23:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:23.862 20:23:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:23.862 20:23:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:24.120 20:23:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:24.120 [ 00:16:24.120 { 00:16:24.120 "name": "ftl0", 00:16:24.120 "aliases": [ 00:16:24.120 "71d3b1be-a432-41c7-ac06-b432fb201faa" 00:16:24.120 ], 00:16:24.120 "product_name": "FTL disk", 00:16:24.120 "block_size": 4096, 00:16:24.120 "num_blocks": 23592960, 00:16:24.120 "uuid": "71d3b1be-a432-41c7-ac06-b432fb201faa", 00:16:24.120 "assigned_rate_limits": { 00:16:24.120 "rw_ios_per_sec": 0, 00:16:24.120 "rw_mbytes_per_sec": 0, 00:16:24.120 "r_mbytes_per_sec": 0, 00:16:24.120 "w_mbytes_per_sec": 0 00:16:24.120 }, 00:16:24.120 "claimed": false, 00:16:24.120 "zoned": false, 00:16:24.120 "supported_io_types": { 00:16:24.120 "read": true, 00:16:24.120 "write": true, 
00:16:24.120 "unmap": true, 00:16:24.120 "write_zeroes": true, 00:16:24.120 "flush": true, 00:16:24.120 "reset": false, 00:16:24.120 "compare": false, 00:16:24.120 "compare_and_write": false, 00:16:24.120 "abort": false, 00:16:24.120 "nvme_admin": false, 00:16:24.120 "nvme_io": false 00:16:24.120 }, 00:16:24.120 "driver_specific": { 00:16:24.120 "ftl": { 00:16:24.120 "base_bdev": "b86ae86f-1cb4-4342-a345-6c298048b801", 00:16:24.120 "cache": "nvc0n1p0" 00:16:24.120 } 00:16:24.120 } 00:16:24.120 } 00:16:24.120 ] 00:16:24.120 20:23:39 -- common/autotest_common.sh@895 -- # return 0 00:16:24.120 20:23:39 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:24.120 20:23:39 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:24.378 20:23:39 -- ftl/trim.sh@56 -- # echo ']}' 00:16:24.378 20:23:39 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:24.637 20:23:39 -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:24.637 { 00:16:24.637 "name": "ftl0", 00:16:24.637 "aliases": [ 00:16:24.637 "71d3b1be-a432-41c7-ac06-b432fb201faa" 00:16:24.637 ], 00:16:24.637 "product_name": "FTL disk", 00:16:24.637 "block_size": 4096, 00:16:24.637 "num_blocks": 23592960, 00:16:24.637 "uuid": "71d3b1be-a432-41c7-ac06-b432fb201faa", 00:16:24.637 "assigned_rate_limits": { 00:16:24.637 "rw_ios_per_sec": 0, 00:16:24.637 "rw_mbytes_per_sec": 0, 00:16:24.637 "r_mbytes_per_sec": 0, 00:16:24.637 "w_mbytes_per_sec": 0 00:16:24.637 }, 00:16:24.637 "claimed": false, 00:16:24.637 "zoned": false, 00:16:24.637 "supported_io_types": { 00:16:24.637 "read": true, 00:16:24.637 "write": true, 00:16:24.637 "unmap": true, 00:16:24.637 "write_zeroes": true, 00:16:24.637 "flush": true, 00:16:24.637 "reset": false, 00:16:24.637 "compare": false, 00:16:24.637 "compare_and_write": false, 00:16:24.637 "abort": false, 00:16:24.637 "nvme_admin": false, 00:16:24.637 "nvme_io": false 00:16:24.637 }, 00:16:24.637 "driver_specific": { 00:16:24.637 "ftl": { 00:16:24.637 "base_bdev": "b86ae86f-1cb4-4342-a345-6c298048b801", 00:16:24.637 "cache": "nvc0n1p0" 00:16:24.637 } 00:16:24.637 } 00:16:24.637 } 00:16:24.637 ]' 00:16:24.637 20:23:39 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:24.637 20:23:39 -- ftl/trim.sh@60 -- # nb=23592960 00:16:24.637 20:23:39 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:24.897 [2024-10-16 20:23:39.602140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.602182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:24.897 [2024-10-16 20:23:39.602195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:24.897 [2024-10-16 20:23:39.602205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.602240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:24.897 [2024-10-16 20:23:39.604733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.604761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:24.897 [2024-10-16 20:23:39.604775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.477 ms 00:16:24.897 [2024-10-16 20:23:39.604783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.605406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
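
The bdev_get_bdevs output above is also the quickest way to size the device: num_blocks 23592960 × block_size 4096 is exactly 90 GiB of user-visible capacity, and the same entry count at the 4-byte L2P address size reported later accounts for the 90.00 MiB l2p region in the layout dump. A minimal sketch of pulling those two fields with rpc.py and jq, using the same paths as trim.sh (the jq filter here is an assumption, not taken from the script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # read num_blocks and block_size for ftl0 in one call
    read -r nb bs < <("$rpc" bdev_get_bdevs -b ftl0 \
        | jq -r '.[0] | "\(.num_blocks) \(.block_size)"')
    echo "capacity: $(( nb * bs / 1024 / 1024 / 1024 )) GiB"   # 90 GiB
    echo "l2p:      $(( nb * 4  / 1024 / 1024 )) MiB"          # 90 MiB
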
[FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.605425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:24.897 [2024-10-16 20:23:39.605439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:16:24.897 [2024-10-16 20:23:39.605446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.609110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.609129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:24.897 [2024-10-16 20:23:39.609143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:16:24.897 [2024-10-16 20:23:39.609152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.616056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.616085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:24.897 [2024-10-16 20:23:39.616097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.862 ms 00:16:24.897 [2024-10-16 20:23:39.616104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.639756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.639884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:24.897 [2024-10-16 20:23:39.639904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.545 ms 00:16:24.897 [2024-10-16 20:23:39.639912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.654922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.655039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:24.897 [2024-10-16 20:23:39.655072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.944 ms 00:16:24.897 [2024-10-16 20:23:39.655080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.655298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.655315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:24.897 [2024-10-16 20:23:39.655329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:24.897 [2024-10-16 20:23:39.655337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.678359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.678463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:24.897 [2024-10-16 20:23:39.678481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.993 ms 00:16:24.897 [2024-10-16 20:23:39.678488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.701604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.701705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:24.897 [2024-10-16 20:23:39.701729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.066 ms 00:16:24.897 [2024-10-16 20:23:39.701736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 
20:23:39.724199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.724301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:24.897 [2024-10-16 20:23:39.724318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.405 ms 00:16:24.897 [2024-10-16 20:23:39.724325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.746862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.897 [2024-10-16 20:23:39.746891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:24.897 [2024-10-16 20:23:39.746904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.420 ms 00:16:24.897 [2024-10-16 20:23:39.746911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.897 [2024-10-16 20:23:39.746964] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:24.897 [2024-10-16 20:23:39.746978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.746989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.746997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: 
free 00:16:24.897 [2024-10-16 20:23:39.747149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:24.897 [2024-10-16 20:23:39.747348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 
261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747783] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:24.898 [2024-10-16 20:23:39.747859] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:24.898 [2024-10-16 20:23:39.747868] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71d3b1be-a432-41c7-ac06-b432fb201faa 00:16:24.898 [2024-10-16 20:23:39.747875] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:24.898 [2024-10-16 20:23:39.747883] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:24.898 [2024-10-16 20:23:39.747890] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:24.898 [2024-10-16 20:23:39.747899] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:24.898 [2024-10-16 20:23:39.747905] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:24.898 [2024-10-16 20:23:39.747914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:24.898 [2024-10-16 20:23:39.747921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:24.898 [2024-10-16 20:23:39.747930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:24.898 [2024-10-16 20:23:39.747936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:24.898 [2024-10-16 20:23:39.747945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.898 [2024-10-16 20:23:39.747953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:24.898 [2024-10-16 20:23:39.748008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:16:24.898 [2024-10-16 20:23:39.748015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.898 [2024-10-16 20:23:39.760173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.898 [2024-10-16 20:23:39.760201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:24.898 [2024-10-16 20:23:39.760212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.100 ms 00:16:24.898 [2024-10-16 20:23:39.760220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.898 [2024-10-16 20:23:39.760439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.898 [2024-10-16 20:23:39.760448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:24.898 [2024-10-16 20:23:39.760457] mngt/ftl_mngt.c: 
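
One detail in the statistics dump above is worth decoding: WAF (write amplification factor) is media writes divided by host writes, and with user writes still at 0 the quotient is undefined, which the dump prints as inf; all 960 writes recorded at this point are FTL metadata from the startup and shutdown steps. A minimal guard for recomputing it from captured counters (values hard-coded from this run):

    total=960 user=0   # from the ftl_dev_dump_stats records above
    if (( user > 0 )); then
        awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t/u }'
    else
        echo "WAF: inf"   # no host I/O yet, so amplification is undefined
    fi
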
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:16:24.898 [2024-10-16 20:23:39.760464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.898 [2024-10-16 20:23:39.804291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:24.898 [2024-10-16 20:23:39.804322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.898 [2024-10-16 20:23:39.804335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:24.898 [2024-10-16 20:23:39.804342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.898 [2024-10-16 20:23:39.804436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:24.898 [2024-10-16 20:23:39.804445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.898 [2024-10-16 20:23:39.804454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:24.898 [2024-10-16 20:23:39.804461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.898 [2024-10-16 20:23:39.804525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:24.898 [2024-10-16 20:23:39.804534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.898 [2024-10-16 20:23:39.804543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:24.898 [2024-10-16 20:23:39.804550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.898 [2024-10-16 20:23:39.804579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:24.898 [2024-10-16 20:23:39.804588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.898 [2024-10-16 20:23:39.804598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:24.898 [2024-10-16 20:23:39.804604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.156 [2024-10-16 20:23:39.888657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.156 [2024-10-16 20:23:39.888693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.156 [2024-10-16 20:23:39.888707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.156 [2024-10-16 20:23:39.888715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.156 [2024-10-16 20:23:39.917888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.156 [2024-10-16 20:23:39.917919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.156 [2024-10-16 20:23:39.917930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.156 [2024-10-16 20:23:39.917937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.156 [2024-10-16 20:23:39.918005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.156 [2024-10-16 20:23:39.918014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:25.156 [2024-10-16 20:23:39.918024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.156 [2024-10-16 20:23:39.918031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.156 [2024-10-16 20:23:39.918114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.156 [2024-10-16 20:23:39.918123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:16:25.156 [2024-10-16 20:23:39.918134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.156 [2024-10-16 20:23:39.918155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.156 [2024-10-16 20:23:39.918275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.156 [2024-10-16 20:23:39.918284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:25.156 [2024-10-16 20:23:39.918296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.156 [2024-10-16 20:23:39.918303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.156 [2024-10-16 20:23:39.918353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.156 [2024-10-16 20:23:39.918361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:25.156 [2024-10-16 20:23:39.918372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.156 [2024-10-16 20:23:39.918379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.156 [2024-10-16 20:23:39.918435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.156 [2024-10-16 20:23:39.918451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:25.156 [2024-10-16 20:23:39.918461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.156 [2024-10-16 20:23:39.918468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.156 [2024-10-16 20:23:39.918526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.156 [2024-10-16 20:23:39.918535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:25.156 [2024-10-16 20:23:39.918545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.156 [2024-10-16 20:23:39.918552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.156 [2024-10-16 20:23:39.918741] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 316.581 ms, result 0 00:16:25.156 true 00:16:25.156 20:23:39 -- ftl/trim.sh@63 -- # killprocess 71966 00:16:25.156 20:23:39 -- common/autotest_common.sh@926 -- # '[' -z 71966 ']' 00:16:25.156 20:23:39 -- common/autotest_common.sh@930 -- # kill -0 71966 00:16:25.156 20:23:39 -- common/autotest_common.sh@931 -- # uname 00:16:25.156 20:23:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:25.156 20:23:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71966 00:16:25.156 killing process with pid 71966 00:16:25.156 20:23:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:25.156 20:23:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:25.156 20:23:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71966' 00:16:25.156 20:23:39 -- common/autotest_common.sh@945 -- # kill 71966 00:16:25.156 20:23:39 -- common/autotest_common.sh@950 -- # wait 71966 00:16:31.718 20:23:45 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:31.718 65536+0 records in 00:16:31.718 65536+0 records out 00:16:31.718 268435456 bytes (268 MB, 256 MiB) copied, 1.07698 s, 249 MB/s 00:16:31.718 20:23:46 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:31.980 [2024-10-16 20:23:46.667948] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:31.980 [2024-10-16 20:23:46.668119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72177 ] 00:16:31.980 [2024-10-16 20:23:46.822190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.240 [2024-10-16 20:23:46.998224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.499 [2024-10-16 20:23:47.249508] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:32.499 [2024-10-16 20:23:47.249788] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:32.499 [2024-10-16 20:23:47.403652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.499 [2024-10-16 20:23:47.403817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:32.499 [2024-10-16 20:23:47.403837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:32.499 [2024-10-16 20:23:47.403846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.499 [2024-10-16 20:23:47.406561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.499 [2024-10-16 20:23:47.406599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:32.499 [2024-10-16 20:23:47.406609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.691 ms 00:16:32.499 [2024-10-16 20:23:47.406617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.499 [2024-10-16 20:23:47.406700] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:32.499 [2024-10-16 20:23:47.407523] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:32.499 [2024-10-16 20:23:47.407692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.499 [2024-10-16 20:23:47.407749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:32.499 [2024-10-16 20:23:47.407774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:16:32.500 [2024-10-16 20:23:47.407793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.500 [2024-10-16 20:23:47.408990] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:32.500 [2024-10-16 20:23:47.422084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.500 [2024-10-16 20:23:47.422209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:32.500 [2024-10-16 20:23:47.422266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.095 ms 00:16:32.500 [2024-10-16 20:23:47.422277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.500 [2024-10-16 20:23:47.422628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.500 [2024-10-16 20:23:47.422656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:32.500 [2024-10-16 20:23:47.422666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:32.500 [2024-10-16 20:23:47.422674] 
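
The two commands traced above (dd at trim.sh@66, spdk_dd at trim.sh@69) are the whole write path of this test: generate 256 MiB of random data (65536 × 4 KiB, moved at 249 MB/s in this run), then stream it into the ftl0 bdev through spdk_dd using the saved ftl.json config. A cleaned-up sketch of the same sequence; the of= redirection is an assumption, since the trace elides where dd's output goes:

    SPDK=/home/vagrant/spdk_repo/spdk
    # 65536 x 4 KiB = 256 MiB of random payload
    dd if=/dev/urandom of="$SPDK/test/ftl/random_pattern" bs=4K count=65536
    # replay it into the FTL bdev; --ob names a bdev, not a file
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" \
        --ob=ftl0 --json="$SPDK/test/ftl/config/ftl.json"
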
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.761 [2024-10-16 20:23:47.428106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.761 [2024-10-16 20:23:47.428141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:32.761 [2024-10-16 20:23:47.428151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.387 ms 00:16:32.761 [2024-10-16 20:23:47.428163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.761 [2024-10-16 20:23:47.428260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.761 [2024-10-16 20:23:47.428271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:32.761 [2024-10-16 20:23:47.428279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:32.761 [2024-10-16 20:23:47.428286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.761 [2024-10-16 20:23:47.428313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.761 [2024-10-16 20:23:47.428321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:32.761 [2024-10-16 20:23:47.428328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:32.761 [2024-10-16 20:23:47.428335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.761 [2024-10-16 20:23:47.428364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:32.761 [2024-10-16 20:23:47.431886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.761 [2024-10-16 20:23:47.431915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.761 [2024-10-16 20:23:47.431924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.535 ms 00:16:32.761 [2024-10-16 20:23:47.431933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.761 [2024-10-16 20:23:47.431971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.761 [2024-10-16 20:23:47.431979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:32.761 [2024-10-16 20:23:47.431987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:32.761 [2024-10-16 20:23:47.431994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.761 [2024-10-16 20:23:47.432011] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:32.761 [2024-10-16 20:23:47.432028] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:32.761 [2024-10-16 20:23:47.432070] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:32.761 [2024-10-16 20:23:47.432087] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:32.761 [2024-10-16 20:23:47.432160] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:32.761 [2024-10-16 20:23:47.432169] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:32.761 [2024-10-16 20:23:47.432178] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:32.761 [2024-10-16 
20:23:47.432188] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:32.761 [2024-10-16 20:23:47.432196] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:32.761 [2024-10-16 20:23:47.432203] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:32.761 [2024-10-16 20:23:47.432210] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:32.761 [2024-10-16 20:23:47.432217] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:32.761 [2024-10-16 20:23:47.432226] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:32.761 [2024-10-16 20:23:47.432233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.761 [2024-10-16 20:23:47.432240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:32.761 [2024-10-16 20:23:47.432248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:16:32.761 [2024-10-16 20:23:47.432255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.761 [2024-10-16 20:23:47.432329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.761 [2024-10-16 20:23:47.432338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:32.761 [2024-10-16 20:23:47.432345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:32.761 [2024-10-16 20:23:47.432351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.761 [2024-10-16 20:23:47.432427] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:32.761 [2024-10-16 20:23:47.432436] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:32.761 [2024-10-16 20:23:47.432444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:32.761 [2024-10-16 20:23:47.432452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.761 [2024-10-16 20:23:47.432459] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:32.761 [2024-10-16 20:23:47.432466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:32.761 [2024-10-16 20:23:47.432473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:32.761 [2024-10-16 20:23:47.432480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:32.761 [2024-10-16 20:23:47.432489] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:32.761 [2024-10-16 20:23:47.432495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:32.761 [2024-10-16 20:23:47.432502] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:32.761 [2024-10-16 20:23:47.432509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:32.761 [2024-10-16 20:23:47.432515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:32.761 [2024-10-16 20:23:47.432522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:32.761 [2024-10-16 20:23:47.432534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:32.761 [2024-10-16 20:23:47.432540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.761 [2024-10-16 20:23:47.432546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:32.761 
[2024-10-16 20:23:47.432553] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:32.761 [2024-10-16 20:23:47.432559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.761 [2024-10-16 20:23:47.432565] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:32.761 [2024-10-16 20:23:47.432571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:32.761 [2024-10-16 20:23:47.432578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:32.761 [2024-10-16 20:23:47.432584] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:32.761 [2024-10-16 20:23:47.432590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:32.761 [2024-10-16 20:23:47.432596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:32.761 [2024-10-16 20:23:47.432602] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:32.761 [2024-10-16 20:23:47.432608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:32.761 [2024-10-16 20:23:47.432615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:32.761 [2024-10-16 20:23:47.432621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:32.761 [2024-10-16 20:23:47.432627] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:32.761 [2024-10-16 20:23:47.432633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:32.761 [2024-10-16 20:23:47.432639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:32.761 [2024-10-16 20:23:47.432646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:32.761 [2024-10-16 20:23:47.432652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:32.761 [2024-10-16 20:23:47.432658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:32.762 [2024-10-16 20:23:47.432663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:32.762 [2024-10-16 20:23:47.432670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:32.762 [2024-10-16 20:23:47.432676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:32.762 [2024-10-16 20:23:47.432683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:32.762 [2024-10-16 20:23:47.432689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:32.762 [2024-10-16 20:23:47.432695] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:32.762 [2024-10-16 20:23:47.432702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:32.762 [2024-10-16 20:23:47.432710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:32.762 [2024-10-16 20:23:47.432721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.762 [2024-10-16 20:23:47.432728] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:32.762 [2024-10-16 20:23:47.432734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:32.762 [2024-10-16 20:23:47.432740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:32.762 [2024-10-16 20:23:47.432747] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:32.762 [2024-10-16 20:23:47.432753] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.25 MiB 00:16:32.762 [2024-10-16 20:23:47.432759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:32.762 [2024-10-16 20:23:47.432766] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:32.762 [2024-10-16 20:23:47.432775] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:32.762 [2024-10-16 20:23:47.432783] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:32.762 [2024-10-16 20:23:47.432789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:32.762 [2024-10-16 20:23:47.432796] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:32.762 [2024-10-16 20:23:47.432803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:32.762 [2024-10-16 20:23:47.432809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:32.762 [2024-10-16 20:23:47.432816] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:32.762 [2024-10-16 20:23:47.432823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:32.762 [2024-10-16 20:23:47.432830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:32.762 [2024-10-16 20:23:47.432836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:32.762 [2024-10-16 20:23:47.432843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:32.762 [2024-10-16 20:23:47.432849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:32.762 [2024-10-16 20:23:47.432857] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:32.762 [2024-10-16 20:23:47.432864] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:32.762 [2024-10-16 20:23:47.432871] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:32.762 [2024-10-16 20:23:47.432882] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:32.762 [2024-10-16 20:23:47.432890] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:32.762 [2024-10-16 20:23:47.432897] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:32.762 [2024-10-16 20:23:47.432904] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 
ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:32.762 [2024-10-16 20:23:47.432911] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:32.762 [2024-10-16 20:23:47.432918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.432925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:32.762 [2024-10-16 20:23:47.432933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:16:32.762 [2024-10-16 20:23:47.432939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.448950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.449106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:32.762 [2024-10-16 20:23:47.449483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.956 ms 00:16:32.762 [2024-10-16 20:23:47.449530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.449735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.449765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:32.762 [2024-10-16 20:23:47.449787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:32.762 [2024-10-16 20:23:47.449805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.493886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.494067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:32.762 [2024-10-16 20:23:47.494137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.046 ms 00:16:32.762 [2024-10-16 20:23:47.494150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.494230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.494241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:32.762 [2024-10-16 20:23:47.494255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:32.762 [2024-10-16 20:23:47.494263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.494698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.494717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:32.762 [2024-10-16 20:23:47.494726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:16:32.762 [2024-10-16 20:23:47.494734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.494865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.494875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:32.762 [2024-10-16 20:23:47.494883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:32.762 [2024-10-16 20:23:47.494891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.510896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.510941] mngt/ftl_mngt.c: 
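
The superblock layout records above express each region as a hex count of 4 KiB blocks, so they can be cross-checked against the MiB figures in the dump_region output. For example, the base-dev region type:0x9 has blk_sz:0x1900000 = 26214400 blocks × 4096 B = 102400 MiB (100 GiB), which lines up with the data_btm region earlier in the dump. A one-line conversion (bash parses the 0x prefix natively in arithmetic context):

    blk_sz=0x1900000
    echo "$(( blk_sz * 4096 / 1024 / 1024 )) MiB"   # 102400 MiB = 100 GiB
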
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:32.762 [2024-10-16 20:23:47.510952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.980 ms 00:16:32.762 [2024-10-16 20:23:47.510963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.524790] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:32.762 [2024-10-16 20:23:47.524991] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:32.762 [2024-10-16 20:23:47.525011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.525021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:32.762 [2024-10-16 20:23:47.525031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.904 ms 00:16:32.762 [2024-10-16 20:23:47.525038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.551554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.551737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:32.762 [2024-10-16 20:23:47.551770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.256 ms 00:16:32.762 [2024-10-16 20:23:47.551778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.565381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.565432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:32.762 [2024-10-16 20:23:47.565444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.509 ms 00:16:32.762 [2024-10-16 20:23:47.565461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.578308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.578491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:32.762 [2024-10-16 20:23:47.578512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.753 ms 00:16:32.762 [2024-10-16 20:23:47.578520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.578926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.578939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:32.762 [2024-10-16 20:23:47.578950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:16:32.762 [2024-10-16 20:23:47.578957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.641183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.641325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:32.762 [2024-10-16 20:23:47.641343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.202 ms 00:16:32.762 [2024-10-16 20:23:47.641351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.651766] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:32.762 [2024-10-16 20:23:47.665585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.665618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:32.762 [2024-10-16 20:23:47.665628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.163 ms 00:16:32.762 [2024-10-16 20:23:47.665635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.762 [2024-10-16 20:23:47.665694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.762 [2024-10-16 20:23:47.665711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:32.763 [2024-10-16 20:23:47.665719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:32.763 [2024-10-16 20:23:47.665729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.763 [2024-10-16 20:23:47.665776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.763 [2024-10-16 20:23:47.665787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:32.763 [2024-10-16 20:23:47.665794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:32.763 [2024-10-16 20:23:47.665802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.763 [2024-10-16 20:23:47.666976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.763 [2024-10-16 20:23:47.667008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:32.763 [2024-10-16 20:23:47.667017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:16:32.763 [2024-10-16 20:23:47.667024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.763 [2024-10-16 20:23:47.667067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.763 [2024-10-16 20:23:47.667076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:32.763 [2024-10-16 20:23:47.667087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:32.763 [2024-10-16 20:23:47.667093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.763 [2024-10-16 20:23:47.667123] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:32.763 [2024-10-16 20:23:47.667133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.763 [2024-10-16 20:23:47.667140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:32.763 [2024-10-16 20:23:47.667148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:32.763 [2024-10-16 20:23:47.667154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.024 [2024-10-16 20:23:47.691027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.024 [2024-10-16 20:23:47.691072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:33.024 [2024-10-16 20:23:47.691083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.851 ms 00:16:33.024 [2024-10-16 20:23:47.691090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.024 [2024-10-16 20:23:47.691174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.024 [2024-10-16 20:23:47.691183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:33.024 [2024-10-16 20:23:47.691191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 
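Each FTL management step above is traced by mngt/ftl_mngt.c as a four-record group (Action, name, duration, status); a status of 0 means the step succeeded, and the status record for this step follows below, ahead of the 'FTL startup' summary that reports the overall wall time. A minimal sketch for totalling those per-step durations offline, assuming the console output has been saved to a hypothetical file named console.log (not part of the original run):

grep -o 'duration: [0-9.]\+ ms' console.log | awk '{ total += $2 } END { printf "steps: %d, total: %.3f ms\n", NR, total }'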
00:16:33.024 [2024-10-16 20:23:47.691199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.024 [2024-10-16 20:23:47.692959] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:33.024 [2024-10-16 20:23:47.696295] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.025 ms, result 0 00:16:33.024 [2024-10-16 20:23:47.697203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:33.024 [2024-10-16 20:23:47.710640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:33.967  [2024-10-16T20:23:49.839Z] Copying: 19/256 [MB] (19 MBps) [2024-10-16T20:23:50.782Z] Copying: 38/256 [MB] (19 MBps) [2024-10-16T20:23:51.724Z] Copying: 65/256 [MB] (26 MBps) [2024-10-16T20:23:53.159Z] Copying: 88/256 [MB] (23 MBps) [2024-10-16T20:23:53.731Z] Copying: 104/256 [MB] (15 MBps) [2024-10-16T20:23:55.117Z] Copying: 122/256 [MB] (17 MBps) [2024-10-16T20:23:56.065Z] Copying: 150/256 [MB] (28 MBps) [2024-10-16T20:23:57.009Z] Copying: 179/256 [MB] (28 MBps) [2024-10-16T20:23:57.953Z] Copying: 199/256 [MB] (19 MBps) [2024-10-16T20:23:58.897Z] Copying: 227/256 [MB] (28 MBps) [2024-10-16T20:23:59.159Z] Copying: 252/256 [MB] (24 MBps) [2024-10-16T20:23:59.159Z] Copying: 256/256 [MB] (average 22 MBps)[2024-10-16 20:23:58.969509] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:44.230 [2024-10-16 20:23:58.976739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.230 [2024-10-16 20:23:58.976876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:44.230 [2024-10-16 20:23:58.976898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:44.230 [2024-10-16 20:23:58.976905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.230 [2024-10-16 20:23:58.976925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:44.230 [2024-10-16 20:23:58.978947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.230 [2024-10-16 20:23:58.978972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:44.230 [2024-10-16 20:23:58.978980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.012 ms 00:16:44.230 [2024-10-16 20:23:58.978986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.230 [2024-10-16 20:23:58.980853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.230 [2024-10-16 20:23:58.980878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:44.230 [2024-10-16 20:23:58.980886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.849 ms 00:16:44.230 [2024-10-16 20:23:58.980891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.230 [2024-10-16 20:23:58.987304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.230 [2024-10-16 20:23:58.987557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:44.230 [2024-10-16 20:23:58.987570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.396 ms 00:16:44.230 [2024-10-16 20:23:58.987576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.230 
[2024-10-16 20:23:58.992843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.230 [2024-10-16 20:23:58.992867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:44.230 [2024-10-16 20:23:58.992875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.224 ms 00:16:44.230 [2024-10-16 20:23:58.992880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.230 [2024-10-16 20:23:59.010828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.230 [2024-10-16 20:23:59.010926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:44.230 [2024-10-16 20:23:59.010938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.911 ms 00:16:44.231 [2024-10-16 20:23:59.010944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.231 [2024-10-16 20:23:59.022776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.231 [2024-10-16 20:23:59.022805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:44.231 [2024-10-16 20:23:59.022814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.799 ms 00:16:44.231 [2024-10-16 20:23:59.022820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.231 [2024-10-16 20:23:59.022922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.231 [2024-10-16 20:23:59.022929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:44.231 [2024-10-16 20:23:59.022935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:44.231 [2024-10-16 20:23:59.022941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.231 [2024-10-16 20:23:59.041765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.231 [2024-10-16 20:23:59.041864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:44.231 [2024-10-16 20:23:59.041876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.813 ms 00:16:44.231 [2024-10-16 20:23:59.041882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.231 [2024-10-16 20:23:59.059873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.231 [2024-10-16 20:23:59.059898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:44.231 [2024-10-16 20:23:59.059906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.960 ms 00:16:44.231 [2024-10-16 20:23:59.059912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.231 [2024-10-16 20:23:59.077442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.231 [2024-10-16 20:23:59.077468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:44.231 [2024-10-16 20:23:59.077476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.495 ms 00:16:44.231 [2024-10-16 20:23:59.077481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.231 [2024-10-16 20:23:59.094654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.231 [2024-10-16 20:23:59.094676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:44.231 [2024-10-16 20:23:59.094684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.118 ms 00:16:44.231 [2024-10-16 20:23:59.094690] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.231 [2024-10-16 20:23:59.094724] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:44.231 [2024-10-16 20:23:59.094736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.094999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095004] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 20:23:59.095150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:44.231 [2024-10-16 
20:23:59.095155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:16:44.232 [2024-10-16 20:23:59.095302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:44.232 [2024-10-16 20:23:59.095320] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:44.232 [2024-10-16 20:23:59.095326] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71d3b1be-a432-41c7-ac06-b432fb201faa 00:16:44.232 [2024-10-16 20:23:59.095332] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:44.232 [2024-10-16 20:23:59.095337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:44.232 [2024-10-16 20:23:59.095343] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:44.232 [2024-10-16 20:23:59.095349] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:44.232 [2024-10-16 20:23:59.095354] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:44.232 [2024-10-16 20:23:59.095359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:44.232 [2024-10-16 20:23:59.095367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:44.232 [2024-10-16 20:23:59.095372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:44.232 [2024-10-16 20:23:59.095376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:44.232 [2024-10-16 20:23:59.095382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.232 [2024-10-16 20:23:59.095387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:44.232 [2024-10-16 20:23:59.095393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:16:44.232 [2024-10-16 20:23:59.095399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.232 [2024-10-16 20:23:59.104867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.232 [2024-10-16 20:23:59.104888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:44.232 [2024-10-16 20:23:59.104896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.455 ms 00:16:44.232 [2024-10-16 20:23:59.104906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.232 [2024-10-16 20:23:59.105078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.232 [2024-10-16 20:23:59.105086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:44.232 [2024-10-16 20:23:59.105093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:16:44.232 [2024-10-16 20:23:59.105098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.232 [2024-10-16 20:23:59.134645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.232 [2024-10-16 20:23:59.134671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.232 [2024-10-16 20:23:59.134678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.232 [2024-10-16 20:23:59.134687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.232 [2024-10-16 20:23:59.134748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.232 [2024-10-16 20:23:59.134754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:16:44.232 [2024-10-16 20:23:59.134760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.232 [2024-10-16 20:23:59.134765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.232 [2024-10-16 20:23:59.134796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.232 [2024-10-16 20:23:59.134803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.232 [2024-10-16 20:23:59.134809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.232 [2024-10-16 20:23:59.134814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.232 [2024-10-16 20:23:59.134830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.232 [2024-10-16 20:23:59.134836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.232 [2024-10-16 20:23:59.134841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.232 [2024-10-16 20:23:59.134847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.493 [2024-10-16 20:23:59.191964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.493 [2024-10-16 20:23:59.191993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.493 [2024-10-16 20:23:59.192001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.493 [2024-10-16 20:23:59.192009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.493 [2024-10-16 20:23:59.214617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.493 [2024-10-16 20:23:59.214643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.493 [2024-10-16 20:23:59.214651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.493 [2024-10-16 20:23:59.214657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.493 [2024-10-16 20:23:59.214699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.493 [2024-10-16 20:23:59.214706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.493 [2024-10-16 20:23:59.214712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.493 [2024-10-16 20:23:59.214718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.493 [2024-10-16 20:23:59.214741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.493 [2024-10-16 20:23:59.214750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.493 [2024-10-16 20:23:59.214756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.493 [2024-10-16 20:23:59.214762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.493 [2024-10-16 20:23:59.214832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.493 [2024-10-16 20:23:59.214839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.493 [2024-10-16 20:23:59.214846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.493 [2024-10-16 20:23:59.214851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.493 [2024-10-16 20:23:59.214877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:16:44.493 [2024-10-16 20:23:59.214886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:44.493 [2024-10-16 20:23:59.214891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.493 [2024-10-16 20:23:59.214897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.493 [2024-10-16 20:23:59.214925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.493 [2024-10-16 20:23:59.214932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.493 [2024-10-16 20:23:59.214938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.493 [2024-10-16 20:23:59.214943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.493 [2024-10-16 20:23:59.214976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.493 [2024-10-16 20:23:59.214985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.493 [2024-10-16 20:23:59.214993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.493 [2024-10-16 20:23:59.214998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.493 [2024-10-16 20:23:59.215121] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 238.385 ms, result 0 00:16:45.436 00:16:45.436 00:16:45.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.436 20:24:00 -- ftl/trim.sh@72 -- # svcpid=72325 00:16:45.436 20:24:00 -- ftl/trim.sh@73 -- # waitforlisten 72325 00:16:45.436 20:24:00 -- common/autotest_common.sh@819 -- # '[' -z 72325 ']' 00:16:45.436 20:24:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.436 20:24:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:45.436 20:24:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.436 20:24:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:45.436 20:24:00 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:45.436 20:24:00 -- common/autotest_common.sh@10 -- # set +x 00:16:45.436 [2024-10-16 20:24:00.255115] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
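The trim test (ftl/trim.sh) launches a fresh spdk_tgt with FTL init logging enabled and waits for its RPC socket before issuing configuration calls; the 'Starting SPDK ... initialization...' banner and the DPDK EAL parameter record that follows are printed by the target as it comes up. A rough standalone equivalent of the waitforlisten helper invoked above, assuming the default /var/tmp/spdk.sock RPC socket (a sketch of the idea, not the helper's actual implementation in autotest_common.sh):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
svcpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1   # keep polling until the target answers RPC requests
done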
00:16:45.436 [2024-10-16 20:24:00.255281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72325 ] 00:16:45.697 [2024-10-16 20:24:00.398387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.697 [2024-10-16 20:24:00.535909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:45.697 [2024-10-16 20:24:00.536104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.273 20:24:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:46.273 20:24:01 -- common/autotest_common.sh@852 -- # return 0 00:16:46.273 20:24:01 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:46.535 [2024-10-16 20:24:01.207370] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:46.535 [2024-10-16 20:24:01.207418] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:46.535 [2024-10-16 20:24:01.374389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.374438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:46.535 [2024-10-16 20:24:01.374454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:46.535 [2024-10-16 20:24:01.374463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.377127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.377164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:46.535 [2024-10-16 20:24:01.377177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.644 ms 00:16:46.535 [2024-10-16 20:24:01.377186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.377262] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:46.535 [2024-10-16 20:24:01.377978] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:46.535 [2024-10-16 20:24:01.378003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.378012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:46.535 [2024-10-16 20:24:01.378025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:16:46.535 [2024-10-16 20:24:01.378033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.379365] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:46.535 [2024-10-16 20:24:01.392568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.392609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:46.535 [2024-10-16 20:24:01.392622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.210 ms 00:16:46.535 [2024-10-16 20:24:01.392632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.392712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.392725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:46.535 [2024-10-16 20:24:01.392735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:46.535 [2024-10-16 20:24:01.392745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.398637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.398679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:46.535 [2024-10-16 20:24:01.398689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.837 ms 00:16:46.535 [2024-10-16 20:24:01.398699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.398786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.398798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:46.535 [2024-10-16 20:24:01.398807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:46.535 [2024-10-16 20:24:01.398817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.398845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.398856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:46.535 [2024-10-16 20:24:01.398864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:46.535 [2024-10-16 20:24:01.398876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.398906] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:46.535 [2024-10-16 20:24:01.402572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.402601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:46.535 [2024-10-16 20:24:01.402613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:16:46.535 [2024-10-16 20:24:01.402621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.402664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.535 [2024-10-16 20:24:01.402673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:46.535 [2024-10-16 20:24:01.402684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:46.535 [2024-10-16 20:24:01.402695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.535 [2024-10-16 20:24:01.402725] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:46.535 [2024-10-16 20:24:01.402744] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:46.535 [2024-10-16 20:24:01.402780] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:46.535 [2024-10-16 20:24:01.402796] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:46.535 [2024-10-16 20:24:01.402873] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:46.535 [2024-10-16 20:24:01.402885] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:46.535 [2024-10-16 20:24:01.402901] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:46.535 [2024-10-16 20:24:01.402912] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:46.535 [2024-10-16 20:24:01.402923] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:46.535 [2024-10-16 20:24:01.402932] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:46.535 [2024-10-16 20:24:01.402942] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:46.536 [2024-10-16 20:24:01.402950] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:46.536 [2024-10-16 20:24:01.402961] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:46.536 [2024-10-16 20:24:01.402970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.536 [2024-10-16 20:24:01.402980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:46.536 [2024-10-16 20:24:01.402989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:16:46.536 [2024-10-16 20:24:01.402999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.536 [2024-10-16 20:24:01.403095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.536 [2024-10-16 20:24:01.403108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:46.536 [2024-10-16 20:24:01.403117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:46.536 [2024-10-16 20:24:01.403127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.536 [2024-10-16 20:24:01.403203] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:46.536 [2024-10-16 20:24:01.403215] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:46.536 [2024-10-16 20:24:01.403224] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:46.536 [2024-10-16 20:24:01.403235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:46.536 [2024-10-16 20:24:01.403253] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:46.536 [2024-10-16 20:24:01.403273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:46.536 [2024-10-16 20:24:01.403281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:46.536 [2024-10-16 20:24:01.403298] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:46.536 [2024-10-16 20:24:01.403307] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:46.536 [2024-10-16 20:24:01.403315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:46.536 [2024-10-16 20:24:01.403324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:46.536 [2024-10-16 20:24:01.403333] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:46.536 [2024-10-16 20:24:01.403342] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:46.536 [2024-10-16 20:24:01.403360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:46.536 [2024-10-16 20:24:01.403367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:46.536 [2024-10-16 20:24:01.403384] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:46.536 [2024-10-16 20:24:01.403394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:46.536 [2024-10-16 20:24:01.403401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:46.536 [2024-10-16 20:24:01.403412] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:46.536 [2024-10-16 20:24:01.403436] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:46.536 [2024-10-16 20:24:01.403443] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:46.536 [2024-10-16 20:24:01.403460] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:46.536 [2024-10-16 20:24:01.403469] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:46.536 [2024-10-16 20:24:01.403486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:46.536 [2024-10-16 20:24:01.403494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:46.536 [2024-10-16 20:24:01.403510] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:46.536 [2024-10-16 20:24:01.403519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:46.536 [2024-10-16 20:24:01.403536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:46.536 [2024-10-16 20:24:01.403543] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:46.536 [2024-10-16 20:24:01.403554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:46.536 [2024-10-16 20:24:01.403560] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:46.536 [2024-10-16 20:24:01.403573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:46.536 [2024-10-16 20:24:01.403581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:46.536 [2024-10-16 20:24:01.403591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.536 [2024-10-16 20:24:01.403599] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:46.536 [2024-10-16 20:24:01.403608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:46.536 [2024-10-16 20:24:01.403618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:46.536 [2024-10-16 20:24:01.403628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:46.536 [2024-10-16 20:24:01.403636] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:46.536 [2024-10-16 20:24:01.403646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:46.536 [2024-10-16 20:24:01.403654] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:46.536 [2024-10-16 20:24:01.403666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:46.536 [2024-10-16 20:24:01.403675] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:46.536 [2024-10-16 20:24:01.403685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:46.536 [2024-10-16 20:24:01.403693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:46.536 [2024-10-16 20:24:01.403706] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:46.536 [2024-10-16 20:24:01.403715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:46.536 [2024-10-16 20:24:01.403724] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:46.536 [2024-10-16 20:24:01.403732] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:46.536 [2024-10-16 20:24:01.403742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:46.536 [2024-10-16 20:24:01.403750] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:46.536 [2024-10-16 20:24:01.403759] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:46.536 [2024-10-16 20:24:01.403768] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:46.536 [2024-10-16 20:24:01.403778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:46.536 [2024-10-16 20:24:01.403787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:46.536 [2024-10-16 20:24:01.403796] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:46.536 [2024-10-16 20:24:01.403805] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:46.536 [2024-10-16 20:24:01.403816] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:46.536 [2024-10-16 20:24:01.403824] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:46.536 [2024-10-16 20:24:01.403834] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:46.536 [2024-10-16 20:24:01.403843] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:46.536 [2024-10-16 20:24:01.403855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.536 [2024-10-16 20:24:01.403863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:46.536 [2024-10-16 20:24:01.403873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:16:46.536 [2024-10-16 20:24:01.403881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.536 [2024-10-16 20:24:01.420104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.536 [2024-10-16 20:24:01.420146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:46.536 [2024-10-16 20:24:01.420164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.171 ms 00:16:46.536 [2024-10-16 20:24:01.420176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.536 [2024-10-16 20:24:01.420300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.536 [2024-10-16 20:24:01.420311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:46.536 [2024-10-16 20:24:01.420324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:46.536 [2024-10-16 20:24:01.420333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.537 [2024-10-16 20:24:01.454237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.537 [2024-10-16 20:24:01.454281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:46.537 [2024-10-16 20:24:01.454294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.878 ms 00:16:46.537 [2024-10-16 20:24:01.454303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.537 [2024-10-16 20:24:01.454376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.537 [2024-10-16 20:24:01.454388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:46.537 [2024-10-16 20:24:01.454400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:46.537 [2024-10-16 20:24:01.454409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.537 [2024-10-16 20:24:01.454957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.537 [2024-10-16 20:24:01.454991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:46.537 [2024-10-16 20:24:01.455007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:16:46.537 [2024-10-16 20:24:01.455016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.537 [2024-10-16 20:24:01.455167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.537 [2024-10-16 20:24:01.455179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:46.537 [2024-10-16 20:24:01.455194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:16:46.537 [2024-10-16 20:24:01.455204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.473128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.473170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:46.799 [2024-10-16 20:24:01.473186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.898 ms 00:16:46.799 [2024-10-16 20:24:01.473195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.487659] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:46.799 [2024-10-16 20:24:01.487704] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:46.799 [2024-10-16 20:24:01.487719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.487729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:46.799 [2024-10-16 20:24:01.487741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.405 ms 00:16:46.799 [2024-10-16 20:24:01.487750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.513522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.513571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:46.799 [2024-10-16 20:24:01.513586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.674 ms 00:16:46.799 [2024-10-16 20:24:01.513595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.526631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.526686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:46.799 [2024-10-16 20:24:01.526700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.942 ms 00:16:46.799 [2024-10-16 20:24:01.526709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.539703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.539746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:46.799 [2024-10-16 20:24:01.539763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.907 ms 00:16:46.799 [2024-10-16 20:24:01.539771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.540197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.540220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:46.799 [2024-10-16 20:24:01.540237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:16:46.799 [2024-10-16 20:24:01.540247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.608308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.608364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:46.799 [2024-10-16 20:24:01.608387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.033 ms 00:16:46.799 [2024-10-16 20:24:01.608396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 
20:24:01.619637] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:46.799 [2024-10-16 20:24:01.638345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.638402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:46.799 [2024-10-16 20:24:01.638416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.851 ms 00:16:46.799 [2024-10-16 20:24:01.638426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.638503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.638519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:46.799 [2024-10-16 20:24:01.638529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:46.799 [2024-10-16 20:24:01.638543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.638598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.638611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:46.799 [2024-10-16 20:24:01.638620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:46.799 [2024-10-16 20:24:01.638631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.640004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.640073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:46.799 [2024-10-16 20:24:01.640086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:16:46.799 [2024-10-16 20:24:01.640096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.640137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.640151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:46.799 [2024-10-16 20:24:01.640161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:46.799 [2024-10-16 20:24:01.640171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.640212] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:46.799 [2024-10-16 20:24:01.640228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.640237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:46.799 [2024-10-16 20:24:01.640248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:46.799 [2024-10-16 20:24:01.640256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.666282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.666329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:46.799 [2024-10-16 20:24:01.666345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.995 ms 00:16:46.799 [2024-10-16 20:24:01.666354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.666467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.799 [2024-10-16 20:24:01.666479] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:46.799 [2024-10-16 20:24:01.666492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:46.799 [2024-10-16 20:24:01.666504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.799 [2024-10-16 20:24:01.667932] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:46.799 [2024-10-16 20:24:01.671593] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 293.206 ms, result 0 00:16:46.799 [2024-10-16 20:24:01.673845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:46.799 Some configs were skipped because the RPC state that can call them passed over. 00:16:46.799 20:24:01 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:47.060 [2024-10-16 20:24:01.926825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.060 [2024-10-16 20:24:01.926891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:47.060 [2024-10-16 20:24:01.926905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.447 ms 00:16:47.060 [2024-10-16 20:24:01.926918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.060 [2024-10-16 20:24:01.926960] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 27.585 ms, result 0 00:16:47.060 true 00:16:47.060 20:24:01 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:47.321 [2024-10-16 20:24:02.141711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.321 [2024-10-16 20:24:02.141743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:47.322 [2024-10-16 20:24:02.141754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.578 ms 00:16:47.322 [2024-10-16 20:24:02.141760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.322 [2024-10-16 20:24:02.141791] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.658 ms, result 0 00:16:47.322 true 00:16:47.322 20:24:02 -- ftl/trim.sh@81 -- # killprocess 72325 00:16:47.322 20:24:02 -- common/autotest_common.sh@926 -- # '[' -z 72325 ']' 00:16:47.322 20:24:02 -- common/autotest_common.sh@930 -- # kill -0 72325 00:16:47.322 20:24:02 -- common/autotest_common.sh@931 -- # uname 00:16:47.322 20:24:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:47.322 20:24:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72325 00:16:47.322 20:24:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:47.322 20:24:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:47.322 20:24:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72325' 00:16:47.322 killing process with pid 72325 00:16:47.322 20:24:02 -- common/autotest_common.sh@945 -- # kill 72325 00:16:47.322 20:24:02 -- common/autotest_common.sh@950 -- # wait 72325 00:16:47.895 [2024-10-16 20:24:02.702318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.895 [2024-10-16 20:24:02.702369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:16:47.895 [2024-10-16 20:24:02.702380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:47.895 [2024-10-16 20:24:02.702388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.702407] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:47.896 [2024-10-16 20:24:02.704536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.704562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:47.896 [2024-10-16 20:24:02.704574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.114 ms 00:16:47.896 [2024-10-16 20:24:02.704581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.704812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.704830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:47.896 [2024-10-16 20:24:02.704838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:16:47.896 [2024-10-16 20:24:02.704845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.708071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.708096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:47.896 [2024-10-16 20:24:02.708108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:16:47.896 [2024-10-16 20:24:02.708114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.713418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.713450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:47.896 [2024-10-16 20:24:02.713459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.275 ms 00:16:47.896 [2024-10-16 20:24:02.713466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.721103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.721129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:47.896 [2024-10-16 20:24:02.721139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.593 ms 00:16:47.896 [2024-10-16 20:24:02.721145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.727629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.727659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:47.896 [2024-10-16 20:24:02.727669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.452 ms 00:16:47.896 [2024-10-16 20:24:02.727676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.727775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.727784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:47.896 [2024-10-16 20:24:02.727792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:47.896 [2024-10-16 20:24:02.727798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:47.896 [2024-10-16 20:24:02.735783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.735809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:47.896 [2024-10-16 20:24:02.735818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.968 ms 00:16:47.896 [2024-10-16 20:24:02.735824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.743092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.743118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:47.896 [2024-10-16 20:24:02.743130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.237 ms 00:16:47.896 [2024-10-16 20:24:02.743136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.750402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.750427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:47.896 [2024-10-16 20:24:02.750437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.236 ms 00:16:47.896 [2024-10-16 20:24:02.750443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.757441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.896 [2024-10-16 20:24:02.757467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:47.896 [2024-10-16 20:24:02.757476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.947 ms 00:16:47.896 [2024-10-16 20:24:02.757482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.896 [2024-10-16 20:24:02.757515] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:47.896 [2024-10-16 20:24:02.757527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757611] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 
20:24:02.757796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:47.896 [2024-10-16 20:24:02.757853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:16:47.897 [2024-10-16 20:24:02.757972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.757999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:47.897 [2024-10-16 20:24:02.758270] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:47.897 [2024-10-16 20:24:02.758279] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71d3b1be-a432-41c7-ac06-b432fb201faa 00:16:47.897 [2024-10-16 20:24:02.758286] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:47.897 [2024-10-16 20:24:02.758293] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:47.897 [2024-10-16 20:24:02.758300] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:47.897 [2024-10-16 20:24:02.758307] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:47.897 [2024-10-16 20:24:02.758313] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:47.897 [2024-10-16 20:24:02.758321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:47.897 [2024-10-16 20:24:02.758327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:47.897 [2024-10-16 20:24:02.758334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:47.897 [2024-10-16 20:24:02.758339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:47.897 [2024-10-16 20:24:02.758347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.897 [2024-10-16 20:24:02.758353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:47.897 [2024-10-16 20:24:02.758361] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:16:47.897 [2024-10-16 20:24:02.758369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.897 [2024-10-16 20:24:02.768057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.897 [2024-10-16 20:24:02.768083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:47.897 [2024-10-16 20:24:02.768094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.671 ms 00:16:47.897 [2024-10-16 20:24:02.768100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.897 [2024-10-16 20:24:02.768264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.897 [2024-10-16 20:24:02.768281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:47.897 [2024-10-16 20:24:02.768291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:16:47.897 [2024-10-16 20:24:02.768297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.897 [2024-10-16 20:24:02.803247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.897 [2024-10-16 20:24:02.803275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.897 [2024-10-16 20:24:02.803285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.897 [2024-10-16 20:24:02.803291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.897 [2024-10-16 20:24:02.803350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.897 [2024-10-16 20:24:02.803357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.897 [2024-10-16 20:24:02.803367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.897 [2024-10-16 20:24:02.803373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.897 [2024-10-16 20:24:02.803405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.897 [2024-10-16 20:24:02.803413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.897 [2024-10-16 20:24:02.803423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.897 [2024-10-16 20:24:02.803430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.897 [2024-10-16 20:24:02.803445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.898 [2024-10-16 20:24:02.803452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.898 [2024-10-16 20:24:02.803459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.898 [2024-10-16 20:24:02.803467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.159 [2024-10-16 20:24:02.863465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.159 [2024-10-16 20:24:02.863499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:48.159 [2024-10-16 20:24:02.863509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.159 [2024-10-16 20:24:02.863516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.159 [2024-10-16 20:24:02.885936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.159 [2024-10-16 20:24:02.885967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:16:48.160 [2024-10-16 20:24:02.885976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.160 [2024-10-16 20:24:02.885985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.160 [2024-10-16 20:24:02.886027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.160 [2024-10-16 20:24:02.886036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:48.160 [2024-10-16 20:24:02.886058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.160 [2024-10-16 20:24:02.886065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.160 [2024-10-16 20:24:02.886092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.160 [2024-10-16 20:24:02.886099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:48.160 [2024-10-16 20:24:02.886106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.160 [2024-10-16 20:24:02.886113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.160 [2024-10-16 20:24:02.886188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.160 [2024-10-16 20:24:02.886197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:48.160 [2024-10-16 20:24:02.886205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.160 [2024-10-16 20:24:02.886211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.160 [2024-10-16 20:24:02.886237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.160 [2024-10-16 20:24:02.886244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:48.160 [2024-10-16 20:24:02.886252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.160 [2024-10-16 20:24:02.886258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.160 [2024-10-16 20:24:02.886289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.160 [2024-10-16 20:24:02.886296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:48.160 [2024-10-16 20:24:02.886305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.160 [2024-10-16 20:24:02.886311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.160 [2024-10-16 20:24:02.886347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.160 [2024-10-16 20:24:02.886354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:48.160 [2024-10-16 20:24:02.886362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.160 [2024-10-16 20:24:02.886368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.160 [2024-10-16 20:24:02.886472] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 184.138 ms, result 0 00:16:48.732 20:24:03 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:48.733 20:24:03 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:48.733 [2024-10-16 20:24:03.567143] Starting SPDK v24.01.1-pre git sha1 
726a04d70 / DPDK 23.11.0 initialization... 00:16:48.733 [2024-10-16 20:24:03.567256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72372 ] 00:16:48.993 [2024-10-16 20:24:03.714646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.993 [2024-10-16 20:24:03.857504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.254 [2024-10-16 20:24:04.060756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:49.254 [2024-10-16 20:24:04.060807] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:49.517 [2024-10-16 20:24:04.201645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.201695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:49.517 [2024-10-16 20:24:04.201705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:49.517 [2024-10-16 20:24:04.201711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.203715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.203746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.517 [2024-10-16 20:24:04.203754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.993 ms 00:16:49.517 [2024-10-16 20:24:04.203759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.203813] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:49.517 [2024-10-16 20:24:04.204370] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:49.517 [2024-10-16 20:24:04.204390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.204396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.517 [2024-10-16 20:24:04.204402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:16:49.517 [2024-10-16 20:24:04.204408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.205391] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:49.517 [2024-10-16 20:24:04.214793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.214822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:49.517 [2024-10-16 20:24:04.214831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.403 ms 00:16:49.517 [2024-10-16 20:24:04.214837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.214900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.214909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:49.517 [2024-10-16 20:24:04.214915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:49.517 [2024-10-16 20:24:04.214920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.219194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.219219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.517 [2024-10-16 20:24:04.219226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.244 ms 00:16:49.517 [2024-10-16 20:24:04.219235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.219313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.219322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.517 [2024-10-16 20:24:04.219328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:49.517 [2024-10-16 20:24:04.219333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.219352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.219357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:49.517 [2024-10-16 20:24:04.219363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:49.517 [2024-10-16 20:24:04.219369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.219392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:49.517 [2024-10-16 20:24:04.222132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.222156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.517 [2024-10-16 20:24:04.222163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:16:49.517 [2024-10-16 20:24:04.222170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.222199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.222206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:49.517 [2024-10-16 20:24:04.222211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:49.517 [2024-10-16 20:24:04.222217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.222229] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:49.517 [2024-10-16 20:24:04.222243] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:49.517 [2024-10-16 20:24:04.222268] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:49.517 [2024-10-16 20:24:04.222280] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:49.517 [2024-10-16 20:24:04.222334] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:49.517 [2024-10-16 20:24:04.222342] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:49.517 [2024-10-16 20:24:04.222349] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:49.517 [2024-10-16 20:24:04.222356] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:49.517 [2024-10-16 20:24:04.222363] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:49.517 [2024-10-16 20:24:04.222368] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:49.517 [2024-10-16 20:24:04.222374] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:49.517 [2024-10-16 20:24:04.222379] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:49.517 [2024-10-16 20:24:04.222386] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:49.517 [2024-10-16 20:24:04.222392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.222397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:49.517 [2024-10-16 20:24:04.222403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:16:49.517 [2024-10-16 20:24:04.222409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.222459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.517 [2024-10-16 20:24:04.222471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:49.517 [2024-10-16 20:24:04.222477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:49.517 [2024-10-16 20:24:04.222483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.517 [2024-10-16 20:24:04.222538] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:49.517 [2024-10-16 20:24:04.222545] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:49.517 [2024-10-16 20:24:04.222551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.518 [2024-10-16 20:24:04.222557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:49.518 [2024-10-16 20:24:04.222568] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:49.518 [2024-10-16 20:24:04.222577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:49.518 [2024-10-16 20:24:04.222582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.518 [2024-10-16 20:24:04.222592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:49.518 [2024-10-16 20:24:04.222597] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:49.518 [2024-10-16 20:24:04.222601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.518 [2024-10-16 20:24:04.222606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:49.518 [2024-10-16 20:24:04.222616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:49.518 [2024-10-16 20:24:04.222622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222627] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:49.518 [2024-10-16 20:24:04.222632] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:49.518 [2024-10-16 20:24:04.222636] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:49.518 [2024-10-16 20:24:04.222646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:49.518 [2024-10-16 20:24:04.222651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:49.518 [2024-10-16 20:24:04.222656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:49.518 [2024-10-16 20:24:04.222661] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.518 [2024-10-16 20:24:04.222670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:49.518 [2024-10-16 20:24:04.222675] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.518 [2024-10-16 20:24:04.222685] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:49.518 [2024-10-16 20:24:04.222689] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.518 [2024-10-16 20:24:04.222699] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:49.518 [2024-10-16 20:24:04.222705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.518 [2024-10-16 20:24:04.222714] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:49.518 [2024-10-16 20:24:04.222719] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.518 [2024-10-16 20:24:04.222729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:49.518 [2024-10-16 20:24:04.222734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:49.518 [2024-10-16 20:24:04.222739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.518 [2024-10-16 20:24:04.222743] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:49.518 [2024-10-16 20:24:04.222749] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:49.518 [2024-10-16 20:24:04.222754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.518 [2024-10-16 20:24:04.222761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.518 [2024-10-16 20:24:04.222767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:49.518 [2024-10-16 20:24:04.222771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:49.518 [2024-10-16 20:24:04.222776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:49.518 [2024-10-16 20:24:04.222782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:49.518 [2024-10-16 20:24:04.222788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:49.518 [2024-10-16 20:24:04.222793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:49.518 
[2024-10-16 20:24:04.222798] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:49.518 [2024-10-16 20:24:04.222805] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.518 [2024-10-16 20:24:04.222811] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:49.518 [2024-10-16 20:24:04.222817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:49.518 [2024-10-16 20:24:04.222822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:49.518 [2024-10-16 20:24:04.222827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:49.518 [2024-10-16 20:24:04.222832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:49.518 [2024-10-16 20:24:04.222838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:49.518 [2024-10-16 20:24:04.222843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:49.518 [2024-10-16 20:24:04.222848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:49.518 [2024-10-16 20:24:04.222853] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:49.518 [2024-10-16 20:24:04.222858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:49.518 [2024-10-16 20:24:04.222863] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:49.518 [2024-10-16 20:24:04.222869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:49.518 [2024-10-16 20:24:04.222874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:49.518 [2024-10-16 20:24:04.222880] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:49.518 [2024-10-16 20:24:04.222889] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.518 [2024-10-16 20:24:04.222894] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:49.518 [2024-10-16 20:24:04.222900] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:49.518 [2024-10-16 20:24:04.222906] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:49.518 [2024-10-16 20:24:04.222911] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:49.518 [2024-10-16 20:24:04.222916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.222922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:49.518 [2024-10-16 20:24:04.222928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:16:49.518 [2024-10-16 20:24:04.222933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.518 [2024-10-16 20:24:04.234706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.234733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.518 [2024-10-16 20:24:04.234741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.742 ms 00:16:49.518 [2024-10-16 20:24:04.234747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.518 [2024-10-16 20:24:04.234833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.234840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:49.518 [2024-10-16 20:24:04.234846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:49.518 [2024-10-16 20:24:04.234851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.518 [2024-10-16 20:24:04.275193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.275227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.518 [2024-10-16 20:24:04.275236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.328 ms 00:16:49.518 [2024-10-16 20:24:04.275243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.518 [2024-10-16 20:24:04.275299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.275308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.518 [2024-10-16 20:24:04.275318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:49.518 [2024-10-16 20:24:04.275324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.518 [2024-10-16 20:24:04.275613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.275635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.518 [2024-10-16 20:24:04.275642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:16:49.518 [2024-10-16 20:24:04.275648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.518 [2024-10-16 20:24:04.275739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.275746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.518 [2024-10-16 20:24:04.275752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:49.518 [2024-10-16 20:24:04.275758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.518 [2024-10-16 20:24:04.286978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.287006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.518 [2024-10-16 20:24:04.287014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
11.204 ms 00:16:49.518 [2024-10-16 20:24:04.287021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.518 [2024-10-16 20:24:04.296956] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:49.518 [2024-10-16 20:24:04.296983] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:49.518 [2024-10-16 20:24:04.296991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.296997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:49.518 [2024-10-16 20:24:04.297004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.883 ms 00:16:49.518 [2024-10-16 20:24:04.297010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.518 [2024-10-16 20:24:04.315562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.518 [2024-10-16 20:24:04.315594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:49.519 [2024-10-16 20:24:04.315603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.492 ms 00:16:49.519 [2024-10-16 20:24:04.315609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.324469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.324493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:49.519 [2024-10-16 20:24:04.324505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.809 ms 00:16:49.519 [2024-10-16 20:24:04.324510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.333114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.333138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:49.519 [2024-10-16 20:24:04.333146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.565 ms 00:16:49.519 [2024-10-16 20:24:04.333151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.333417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.333436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:49.519 [2024-10-16 20:24:04.333442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:16:49.519 [2024-10-16 20:24:04.333450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.378856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.378886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:49.519 [2024-10-16 20:24:04.378895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.388 ms 00:16:49.519 [2024-10-16 20:24:04.378904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.386932] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:49.519 [2024-10-16 20:24:04.398099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.398127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:49.519 [2024-10-16 
20:24:04.398137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.139 ms 00:16:49.519 [2024-10-16 20:24:04.398143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.398192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.398200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:49.519 [2024-10-16 20:24:04.398209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:49.519 [2024-10-16 20:24:04.398215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.398251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.398257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:49.519 [2024-10-16 20:24:04.398263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:49.519 [2024-10-16 20:24:04.398269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.399193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.399219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:49.519 [2024-10-16 20:24:04.399225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:16:49.519 [2024-10-16 20:24:04.399231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.399254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.399264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:49.519 [2024-10-16 20:24:04.399269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:49.519 [2024-10-16 20:24:04.399275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.399300] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:49.519 [2024-10-16 20:24:04.399307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.399313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:49.519 [2024-10-16 20:24:04.399318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:49.519 [2024-10-16 20:24:04.399324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.417022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.417054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:49.519 [2024-10-16 20:24:04.417063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.682 ms 00:16:49.519 [2024-10-16 20:24:04.417068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.417132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.519 [2024-10-16 20:24:04.417140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:49.519 [2024-10-16 20:24:04.417147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:49.519 [2024-10-16 20:24:04.417152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.519 [2024-10-16 20:24:04.417757] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:49.519 [2024-10-16 20:24:04.420154] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 215.887 ms, result 0 00:16:49.519 [2024-10-16 20:24:04.420802] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:49.519 [2024-10-16 20:24:04.435884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:50.904  [2024-10-16T20:24:06.779Z] Copying: 24/256 [MB] (24 MBps) [2024-10-16T20:24:07.725Z] Copying: 39/256 [MB] (15 MBps) [2024-10-16T20:24:08.679Z] Copying: 56/256 [MB] (16 MBps) [2024-10-16T20:24:09.650Z] Copying: 75/256 [MB] (18 MBps) [2024-10-16T20:24:10.595Z] Copying: 96/256 [MB] (20 MBps) [2024-10-16T20:24:11.538Z] Copying: 116/256 [MB] (20 MBps) [2024-10-16T20:24:12.482Z] Copying: 136/256 [MB] (20 MBps) [2024-10-16T20:24:13.871Z] Copying: 149/256 [MB] (12 MBps) [2024-10-16T20:24:14.445Z] Copying: 170/256 [MB] (21 MBps) [2024-10-16T20:24:15.829Z] Copying: 191/256 [MB] (21 MBps) [2024-10-16T20:24:16.772Z] Copying: 205/256 [MB] (14 MBps) [2024-10-16T20:24:17.718Z] Copying: 221/256 [MB] (15 MBps) [2024-10-16T20:24:18.291Z] Copying: 237/256 [MB] (16 MBps) [2024-10-16T20:24:18.291Z] Copying: 256/256 [MB] (average 18 MBps)[2024-10-16 20:24:18.200205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:03.362 [2024-10-16 20:24:18.210714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.362 [2024-10-16 20:24:18.210779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:03.362 [2024-10-16 20:24:18.210795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:03.362 [2024-10-16 20:24:18.210803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.362 [2024-10-16 20:24:18.210830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:03.362 [2024-10-16 20:24:18.213748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.362 [2024-10-16 20:24:18.213791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:03.362 [2024-10-16 20:24:18.213802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.902 ms 00:17:03.362 [2024-10-16 20:24:18.213810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.362 [2024-10-16 20:24:18.214103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.362 [2024-10-16 20:24:18.214115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:03.362 [2024-10-16 20:24:18.214125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:17:03.362 [2024-10-16 20:24:18.214137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.362 [2024-10-16 20:24:18.217872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.362 [2024-10-16 20:24:18.217898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:03.362 [2024-10-16 20:24:18.217907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.719 ms 00:17:03.362 [2024-10-16 20:24:18.217916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.362 [2024-10-16 20:24:18.224781] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.362 [2024-10-16 20:24:18.224825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:03.362 [2024-10-16 20:24:18.224837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.829 ms 00:17:03.362 [2024-10-16 20:24:18.224845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.362 [2024-10-16 20:24:18.251743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.362 [2024-10-16 20:24:18.251799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:03.362 [2024-10-16 20:24:18.251812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.818 ms 00:17:03.362 [2024-10-16 20:24:18.251821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.362 [2024-10-16 20:24:18.269234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.362 [2024-10-16 20:24:18.269286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:03.362 [2024-10-16 20:24:18.269299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.327 ms 00:17:03.362 [2024-10-16 20:24:18.269307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.362 [2024-10-16 20:24:18.269485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.362 [2024-10-16 20:24:18.269498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:03.362 [2024-10-16 20:24:18.269508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:03.362 [2024-10-16 20:24:18.269515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.624 [2024-10-16 20:24:18.296583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.624 [2024-10-16 20:24:18.296633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:03.624 [2024-10-16 20:24:18.296645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.049 ms 00:17:03.624 [2024-10-16 20:24:18.296652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.624 [2024-10-16 20:24:18.323460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.624 [2024-10-16 20:24:18.323511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:03.624 [2024-10-16 20:24:18.323522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.724 ms 00:17:03.624 [2024-10-16 20:24:18.323529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.624 [2024-10-16 20:24:18.349646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.624 [2024-10-16 20:24:18.349693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:03.624 [2024-10-16 20:24:18.349705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.035 ms 00:17:03.624 [2024-10-16 20:24:18.349712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.624 [2024-10-16 20:24:18.375418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.624 [2024-10-16 20:24:18.375466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:03.624 [2024-10-16 20:24:18.375477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.593 ms 00:17:03.624 [2024-10-16 20:24:18.375484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:03.624 [2024-10-16 20:24:18.375552] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:03.624 [2024-10-16 20:24:18.375569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:03.624 [2024-10-16 20:24:18.375579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:03.624 [2024-10-16 20:24:18.375587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:03.624 [2024-10-16 20:24:18.375595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:03.624 [2024-10-16 20:24:18.375603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:03.624 [2024-10-16 20:24:18.375610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:03.624 [2024-10-16 20:24:18.375618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:03.624 [2024-10-16 20:24:18.375626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:03.624 [2024-10-16 20:24:18.375634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:17:03.625 [2024-10-16 20:24:18.375747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.375996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376342] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:03.625 [2024-10-16 20:24:18.376365] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:03.625 [2024-10-16 20:24:18.376373] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71d3b1be-a432-41c7-ac06-b432fb201faa 00:17:03.625 [2024-10-16 20:24:18.376382] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:03.625 [2024-10-16 20:24:18.376390] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:03.625 [2024-10-16 20:24:18.376397] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:03.626 [2024-10-16 20:24:18.376405] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:03.626 [2024-10-16 20:24:18.376412] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:03.626 [2024-10-16 20:24:18.376423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:03.626 [2024-10-16 20:24:18.376431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:03.626 [2024-10-16 20:24:18.376437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:03.626 [2024-10-16 20:24:18.376443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:03.626 [2024-10-16 20:24:18.376450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.626 [2024-10-16 20:24:18.376458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:03.626 [2024-10-16 20:24:18.376470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:17:03.626 [2024-10-16 20:24:18.376477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.390171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.626 [2024-10-16 20:24:18.390216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:03.626 [2024-10-16 20:24:18.390235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.658 ms 00:17:03.626 [2024-10-16 20:24:18.390243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.390486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.626 [2024-10-16 20:24:18.390496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:03.626 [2024-10-16 20:24:18.390504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:17:03.626 [2024-10-16 20:24:18.390512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.432182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.432235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:03.626 [2024-10-16 20:24:18.432253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.432261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.432357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.432367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:03.626 
[2024-10-16 20:24:18.432376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.432386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.432440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.432450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:03.626 [2024-10-16 20:24:18.432459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.432472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.432490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.432497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:03.626 [2024-10-16 20:24:18.432505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.432513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.515760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.515812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:03.626 [2024-10-16 20:24:18.515830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.515838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.548332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.548385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:03.626 [2024-10-16 20:24:18.548397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.548405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.548470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.548480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:03.626 [2024-10-16 20:24:18.548489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.548497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.548537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.548547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:03.626 [2024-10-16 20:24:18.548556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.548563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.548671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.548682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:03.626 [2024-10-16 20:24:18.548690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.548698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.548736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.548746] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:03.626 [2024-10-16 20:24:18.548754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.548762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.548804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.548813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:03.626 [2024-10-16 20:24:18.548822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.548830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.548882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.626 [2024-10-16 20:24:18.548896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:03.626 [2024-10-16 20:24:18.548905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.626 [2024-10-16 20:24:18.548913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.626 [2024-10-16 20:24:18.549097] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.364 ms, result 0 00:17:04.571 00:17:04.571 00:17:04.571 20:24:19 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:04.571 20:24:19 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:05.142 20:24:20 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:05.403 [2024-10-16 20:24:20.087561] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:05.403 [2024-10-16 20:24:20.087699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72549 ] 00:17:05.403 [2024-10-16 20:24:20.241983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.664 [2024-10-16 20:24:20.476020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.926 [2024-10-16 20:24:20.762245] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.926 [2024-10-16 20:24:20.762325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:06.188 [2024-10-16 20:24:20.918802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.188 [2024-10-16 20:24:20.918867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:06.188 [2024-10-16 20:24:20.918882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:06.188 [2024-10-16 20:24:20.918891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.188 [2024-10-16 20:24:20.921923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.188 [2024-10-16 20:24:20.921977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.188 [2024-10-16 20:24:20.921988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:17:06.188 [2024-10-16 20:24:20.921996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.188 [2024-10-16 20:24:20.922134] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:06.188 [2024-10-16 20:24:20.922914] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:06.188 [2024-10-16 20:24:20.922947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.188 [2024-10-16 20:24:20.922956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.188 [2024-10-16 20:24:20.922965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:17:06.188 [2024-10-16 20:24:20.922973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.188 [2024-10-16 20:24:20.924860] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:06.188 [2024-10-16 20:24:20.939305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.188 [2024-10-16 20:24:20.939356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:06.188 [2024-10-16 20:24:20.939370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.447 ms 00:17:06.188 [2024-10-16 20:24:20.939379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.188 [2024-10-16 20:24:20.939502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.188 [2024-10-16 20:24:20.939514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:06.188 [2024-10-16 20:24:20.939524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:06.188 [2024-10-16 20:24:20.939532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.189 [2024-10-16 20:24:20.947610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.189 [2024-10-16 
20:24:20.947653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.189 [2024-10-16 20:24:20.947664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.029 ms 00:17:06.189 [2024-10-16 20:24:20.947678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.189 [2024-10-16 20:24:20.947795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.189 [2024-10-16 20:24:20.947807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.189 [2024-10-16 20:24:20.947817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:06.189 [2024-10-16 20:24:20.947828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.189 [2024-10-16 20:24:20.947855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.189 [2024-10-16 20:24:20.947863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:06.189 [2024-10-16 20:24:20.947872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:06.189 [2024-10-16 20:24:20.947879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.189 [2024-10-16 20:24:20.947911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:06.189 [2024-10-16 20:24:20.952151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.189 [2024-10-16 20:24:20.952190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.189 [2024-10-16 20:24:20.952201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.256 ms 00:17:06.189 [2024-10-16 20:24:20.952213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.189 [2024-10-16 20:24:20.952288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.189 [2024-10-16 20:24:20.952298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:06.189 [2024-10-16 20:24:20.952307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:06.189 [2024-10-16 20:24:20.952315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.189 [2024-10-16 20:24:20.952334] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:06.189 [2024-10-16 20:24:20.952356] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:06.189 [2024-10-16 20:24:20.952391] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:06.189 [2024-10-16 20:24:20.952411] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:06.189 [2024-10-16 20:24:20.952487] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:06.189 [2024-10-16 20:24:20.952499] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:06.189 [2024-10-16 20:24:20.952510] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:06.189 [2024-10-16 20:24:20.952519] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:06.189 [2024-10-16 20:24:20.952529] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:06.189 [2024-10-16 20:24:20.952537] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:06.189 [2024-10-16 20:24:20.952545] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:06.189 [2024-10-16 20:24:20.952553] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:06.189 [2024-10-16 20:24:20.952564] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:06.189 [2024-10-16 20:24:20.952573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.189 [2024-10-16 20:24:20.952580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:06.189 [2024-10-16 20:24:20.952588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:17:06.189 [2024-10-16 20:24:20.952597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.189 [2024-10-16 20:24:20.952663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.189 [2024-10-16 20:24:20.952674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:06.189 [2024-10-16 20:24:20.952682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:06.189 [2024-10-16 20:24:20.952689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.189 [2024-10-16 20:24:20.952767] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:06.189 [2024-10-16 20:24:20.952777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:06.189 [2024-10-16 20:24:20.952785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.189 [2024-10-16 20:24:20.952793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.189 [2024-10-16 20:24:20.952801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:06.189 [2024-10-16 20:24:20.952807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:06.189 [2024-10-16 20:24:20.952814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:06.189 [2024-10-16 20:24:20.952821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:06.189 [2024-10-16 20:24:20.952828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:06.189 [2024-10-16 20:24:20.952835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.189 [2024-10-16 20:24:20.952844] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:06.189 [2024-10-16 20:24:20.952851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:06.189 [2024-10-16 20:24:20.952860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.189 [2024-10-16 20:24:20.952868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:06.189 [2024-10-16 20:24:20.952885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:06.189 [2024-10-16 20:24:20.952893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.189 [2024-10-16 20:24:20.952901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:06.189 [2024-10-16 20:24:20.952908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:06.189 [2024-10-16 20:24:20.952917] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:06.189 [2024-10-16 20:24:20.952924] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:06.189 [2024-10-16 20:24:20.952932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:06.189 [2024-10-16 20:24:20.952940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:06.189 [2024-10-16 20:24:20.952947] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:06.189 [2024-10-16 20:24:20.952958] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:06.189 [2024-10-16 20:24:20.952965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:06.189 [2024-10-16 20:24:20.952972] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:06.189 [2024-10-16 20:24:20.952979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:06.189 [2024-10-16 20:24:20.952985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:06.189 [2024-10-16 20:24:20.952991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:06.189 [2024-10-16 20:24:20.952998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:06.189 [2024-10-16 20:24:20.953005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:06.189 [2024-10-16 20:24:20.953011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:06.189 [2024-10-16 20:24:20.953018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:06.189 [2024-10-16 20:24:20.953024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:06.189 [2024-10-16 20:24:20.953030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:06.189 [2024-10-16 20:24:20.953036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:06.189 [2024-10-16 20:24:20.953065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.189 [2024-10-16 20:24:20.953072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:06.189 [2024-10-16 20:24:20.953078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:06.189 [2024-10-16 20:24:20.953085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.189 [2024-10-16 20:24:20.953091] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:06.189 [2024-10-16 20:24:20.953099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:06.189 [2024-10-16 20:24:20.953107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.189 [2024-10-16 20:24:20.953119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.189 [2024-10-16 20:24:20.953128] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:06.189 [2024-10-16 20:24:20.953136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:06.189 [2024-10-16 20:24:20.953146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:06.189 [2024-10-16 20:24:20.953154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:06.189 [2024-10-16 20:24:20.953161] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:06.189 [2024-10-16 20:24:20.953168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:06.189 [2024-10-16 20:24:20.953177] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:06.189 [2024-10-16 20:24:20.953188] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.189 [2024-10-16 20:24:20.953197] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:06.189 [2024-10-16 20:24:20.953204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:06.189 [2024-10-16 20:24:20.953212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:06.189 [2024-10-16 20:24:20.953219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:06.189 [2024-10-16 20:24:20.953225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:06.189 [2024-10-16 20:24:20.953233] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:06.189 [2024-10-16 20:24:20.953239] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:06.189 [2024-10-16 20:24:20.953247] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:06.189 [2024-10-16 20:24:20.953254] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:06.189 [2024-10-16 20:24:20.953262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:06.189 [2024-10-16 20:24:20.953269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:06.189 [2024-10-16 20:24:20.953276] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:06.189 [2024-10-16 20:24:20.953283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:06.190 [2024-10-16 20:24:20.953290] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:06.190 [2024-10-16 20:24:20.953305] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.190 [2024-10-16 20:24:20.953313] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:06.190 [2024-10-16 20:24:20.953322] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:06.190 [2024-10-16 20:24:20.953330] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:06.190 [2024-10-16 20:24:20.953338] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:06.190 [2024-10-16 20:24:20.953353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:20.953361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:06.190 [2024-10-16 20:24:20.953369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:17:06.190 [2024-10-16 20:24:20.953380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:20.971709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:20.971761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.190 [2024-10-16 20:24:20.971774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.283 ms 00:17:06.190 [2024-10-16 20:24:20.971783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:20.971915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:20.971925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:06.190 [2024-10-16 20:24:20.971934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:06.190 [2024-10-16 20:24:20.971944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.016869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.016922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.190 [2024-10-16 20:24:21.016936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.900 ms 00:17:06.190 [2024-10-16 20:24:21.016945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.017034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.017060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.190 [2024-10-16 20:24:21.017075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:06.190 [2024-10-16 20:24:21.017084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.017669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.017700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.190 [2024-10-16 20:24:21.017712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:17:06.190 [2024-10-16 20:24:21.017720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.017864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.017874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.190 [2024-10-16 20:24:21.017884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:17:06.190 [2024-10-16 20:24:21.017891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.035175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.035224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.190 [2024-10-16 20:24:21.035236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.257 ms 00:17:06.190 
[2024-10-16 20:24:21.035247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.049942] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:06.190 [2024-10-16 20:24:21.049990] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:06.190 [2024-10-16 20:24:21.050002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.050011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:06.190 [2024-10-16 20:24:21.050021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.634 ms 00:17:06.190 [2024-10-16 20:24:21.050028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.076538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.076594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:06.190 [2024-10-16 20:24:21.076607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.402 ms 00:17:06.190 [2024-10-16 20:24:21.076615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.090019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.090074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:06.190 [2024-10-16 20:24:21.090097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.307 ms 00:17:06.190 [2024-10-16 20:24:21.090105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.102975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.103017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:06.190 [2024-10-16 20:24:21.103029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.782 ms 00:17:06.190 [2024-10-16 20:24:21.103036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.190 [2024-10-16 20:24:21.103460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.190 [2024-10-16 20:24:21.103544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:06.190 [2024-10-16 20:24:21.103554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:17:06.190 [2024-10-16 20:24:21.103565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.451 [2024-10-16 20:24:21.171023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.452 [2024-10-16 20:24:21.171089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:06.452 [2024-10-16 20:24:21.171103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.430 ms 00:17:06.452 [2024-10-16 20:24:21.171118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.452 [2024-10-16 20:24:21.182334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:06.452 [2024-10-16 20:24:21.201196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.452 [2024-10-16 20:24:21.201242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:06.452 [2024-10-16 20:24:21.201256] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.974 ms 00:17:06.452 [2024-10-16 20:24:21.201264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.452 [2024-10-16 20:24:21.201349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.452 [2024-10-16 20:24:21.201359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:06.452 [2024-10-16 20:24:21.201373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:06.452 [2024-10-16 20:24:21.201382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.452 [2024-10-16 20:24:21.201442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.452 [2024-10-16 20:24:21.201453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:06.452 [2024-10-16 20:24:21.201462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:06.452 [2024-10-16 20:24:21.201470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.452 [2024-10-16 20:24:21.202871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.452 [2024-10-16 20:24:21.202919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:06.452 [2024-10-16 20:24:21.202929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.379 ms 00:17:06.452 [2024-10-16 20:24:21.202937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.452 [2024-10-16 20:24:21.202976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.452 [2024-10-16 20:24:21.202988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:06.452 [2024-10-16 20:24:21.202996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:06.452 [2024-10-16 20:24:21.203004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.452 [2024-10-16 20:24:21.203070] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:06.452 [2024-10-16 20:24:21.203081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.452 [2024-10-16 20:24:21.203090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:06.452 [2024-10-16 20:24:21.203102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:06.452 [2024-10-16 20:24:21.203110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.452 [2024-10-16 20:24:21.229379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.452 [2024-10-16 20:24:21.229428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:06.452 [2024-10-16 20:24:21.229441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.242 ms 00:17:06.452 [2024-10-16 20:24:21.229451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.452 [2024-10-16 20:24:21.229562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.452 [2024-10-16 20:24:21.229574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:06.452 [2024-10-16 20:24:21.229584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:06.452 [2024-10-16 20:24:21.229594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.452 [2024-10-16 20:24:21.230718] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:06.452 [2024-10-16 20:24:21.234334] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 311.600 ms, result 0 00:17:06.452 [2024-10-16 20:24:21.235582] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.452 [2024-10-16 20:24:21.249467] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:06.714  [2024-10-16T20:24:21.643Z] Copying: 4096/4096 [kB] (average 13 MBps)[2024-10-16 20:24:21.553917] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.714 [2024-10-16 20:24:21.563510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.714 [2024-10-16 20:24:21.563565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:06.714 [2024-10-16 20:24:21.563578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:06.714 [2024-10-16 20:24:21.563587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.714 [2024-10-16 20:24:21.563612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:06.714 [2024-10-16 20:24:21.566623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.714 [2024-10-16 20:24:21.566664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:06.714 [2024-10-16 20:24:21.566676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:17:06.714 [2024-10-16 20:24:21.566685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.714 [2024-10-16 20:24:21.569834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.714 [2024-10-16 20:24:21.569882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:06.714 [2024-10-16 20:24:21.569892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.121 ms 00:17:06.714 [2024-10-16 20:24:21.569908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.714 [2024-10-16 20:24:21.574383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.714 [2024-10-16 20:24:21.574419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:06.714 [2024-10-16 20:24:21.574431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.457 ms 00:17:06.714 [2024-10-16 20:24:21.574439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.714 [2024-10-16 20:24:21.581333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.714 [2024-10-16 20:24:21.581376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:06.714 [2024-10-16 20:24:21.581387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:17:06.714 [2024-10-16 20:24:21.581402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.714 [2024-10-16 20:24:21.607192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.714 [2024-10-16 20:24:21.607241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:06.714 [2024-10-16 20:24:21.607254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.731 ms 00:17:06.714 [2024-10-16 
20:24:21.607260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.714 [2024-10-16 20:24:21.624020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.714 [2024-10-16 20:24:21.624081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:06.714 [2024-10-16 20:24:21.624093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.692 ms 00:17:06.714 [2024-10-16 20:24:21.624101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.714 [2024-10-16 20:24:21.624274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.714 [2024-10-16 20:24:21.624287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:06.714 [2024-10-16 20:24:21.624296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:06.714 [2024-10-16 20:24:21.624304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.977 [2024-10-16 20:24:21.650333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.977 [2024-10-16 20:24:21.650381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:06.977 [2024-10-16 20:24:21.650393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.010 ms 00:17:06.977 [2024-10-16 20:24:21.650399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.977 [2024-10-16 20:24:21.676511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.977 [2024-10-16 20:24:21.676555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:06.977 [2024-10-16 20:24:21.676566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.046 ms 00:17:06.977 [2024-10-16 20:24:21.676574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.977 [2024-10-16 20:24:21.701862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.977 [2024-10-16 20:24:21.701907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:06.977 [2024-10-16 20:24:21.701920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.208 ms 00:17:06.977 [2024-10-16 20:24:21.701927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.977 [2024-10-16 20:24:21.727169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.977 [2024-10-16 20:24:21.727216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:06.977 [2024-10-16 20:24:21.727227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.131 ms 00:17:06.977 [2024-10-16 20:24:21.727234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.977 [2024-10-16 20:24:21.727301] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:06.977 [2024-10-16 20:24:21.727318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727351] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:06.977 [2024-10-16 20:24:21.727486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 
20:24:21.727539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:17:06.978 [2024-10-16 20:24:21.727727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.727999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:06.978 [2024-10-16 20:24:21.728132] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:06.978 [2024-10-16 20:24:21.728141] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71d3b1be-a432-41c7-ac06-b432fb201faa 00:17:06.978 [2024-10-16 20:24:21.728150] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:06.978 [2024-10-16 20:24:21.728159] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:06.978 [2024-10-16 
20:24:21.728168] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:06.978 [2024-10-16 20:24:21.728177] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:06.978 [2024-10-16 20:24:21.728188] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:06.978 [2024-10-16 20:24:21.728197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:06.978 [2024-10-16 20:24:21.728205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:06.978 [2024-10-16 20:24:21.728212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:06.978 [2024-10-16 20:24:21.728218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:06.978 [2024-10-16 20:24:21.728226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.978 [2024-10-16 20:24:21.728235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:06.978 [2024-10-16 20:24:21.728244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:17:06.978 [2024-10-16 20:24:21.728255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.978 [2024-10-16 20:24:21.741521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.978 [2024-10-16 20:24:21.741566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:06.978 [2024-10-16 20:24:21.741584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.233 ms 00:17:06.978 [2024-10-16 20:24:21.741592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.978 [2024-10-16 20:24:21.741850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.978 [2024-10-16 20:24:21.741975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:06.978 [2024-10-16 20:24:21.741984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:17:06.978 [2024-10-16 20:24:21.741993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.783500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.783548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.979 [2024-10-16 20:24:21.783566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.783574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.783670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.783681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.979 [2024-10-16 20:24:21.783690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.783697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.783750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.783761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.979 [2024-10-16 20:24:21.783769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.783783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.783800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.783808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.979 [2024-10-16 20:24:21.783815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.783822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.865011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.865071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.979 [2024-10-16 20:24:21.865090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.865098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.897409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.897457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.979 [2024-10-16 20:24:21.897470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.897478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.897539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.897548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.979 [2024-10-16 20:24:21.897558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.897566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.897618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.897627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.979 [2024-10-16 20:24:21.897635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.897644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.897751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.897763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.979 [2024-10-16 20:24:21.897771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.897779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.897813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.897823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:06.979 [2024-10-16 20:24:21.897831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.897839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.897879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.897888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.979 [2024-10-16 20:24:21.897896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.897904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 
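The Rollback entries here replay the startup step names in reverse completion order (Initialize reloc first, Open base bdev last): during 'FTL shutdown' the management process unwinds the cleanup of each step that completed at startup. A hypothetical minimal model of that pattern (the real logic lives in mngt/ftl_mngt.c):

# Hypothetical sketch of the Action/Rollback pairing traced in this log.
cleanups = []

def run_step(name, action, cleanup=None):
    print(f"Action   name: {name}")       # traced above as "Action / name: <step>"
    action()
    if cleanup:
        cleanups.append((name, cleanup))  # remember how to undo this step

def shutdown():
    while cleanups:                       # unwind in reverse completion order
        name, undo = cleanups.pop()
        print(f"Rollback name: {name}")   # traced above as "Rollback / name: <step>"
        undo()

run_step("Open base bdev", action=lambda: None, cleanup=lambda: None)
run_step("Initialize reloc", action=lambda: None, cleanup=lambda: None)
shutdown()  # rolls back "Initialize reloc" first, "Open base bdev" last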
[2024-10-16 20:24:21.897954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.979 [2024-10-16 20:24:21.897968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.979 [2024-10-16 20:24:21.897977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.979 [2024-10-16 20:24:21.897986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.979 [2024-10-16 20:24:21.898166] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.659 ms, result 0 00:17:07.924 00:17:07.924 00:17:07.924 20:24:22 -- ftl/trim.sh@93 -- # svcpid=72585 00:17:07.924 20:24:22 -- ftl/trim.sh@94 -- # waitforlisten 72585 00:17:07.924 20:24:22 -- common/autotest_common.sh@819 -- # '[' -z 72585 ']' 00:17:07.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.924 20:24:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.924 20:24:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:07.924 20:24:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.924 20:24:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:07.924 20:24:22 -- common/autotest_common.sh@10 -- # set +x 00:17:07.924 20:24:22 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:08.185 [2024-10-16 20:24:22.867668] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:08.185 [2024-10-16 20:24:22.867770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72585 ] 00:17:08.185 [2024-10-16 20:24:23.013845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.446 [2024-10-16 20:24:23.237825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:08.447 [2024-10-16 20:24:23.238070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.835 20:24:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.835 20:24:24 -- common/autotest_common.sh@852 -- # return 0 00:17:09.835 20:24:24 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:09.835 [2024-10-16 20:24:24.609827] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:09.835 [2024-10-16 20:24:24.609903] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.100 [2024-10-16 20:24:24.782395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.100 [2024-10-16 20:24:24.782456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:10.100 [2024-10-16 20:24:24.782474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:10.101 [2024-10-16 20:24:24.782482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.785408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.785461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.101 [2024-10-16 20:24:24.785475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.905 ms 00:17:10.101 [2024-10-16 20:24:24.785483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.785618] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:10.101 [2024-10-16 20:24:24.786679] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:10.101 [2024-10-16 20:24:24.786748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.786760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.101 [2024-10-16 20:24:24.786773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:17:10.101 [2024-10-16 20:24:24.786781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.788660] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:10.101 [2024-10-16 20:24:24.800709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.800761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:10.101 [2024-10-16 20:24:24.800773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.059 ms 00:17:10.101 [2024-10-16 20:24:24.800782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.800878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.800889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:10.101 [2024-10-16 20:24:24.800898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:10.101 [2024-10-16 20:24:24.800908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.808411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.808455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.101 [2024-10-16 20:24:24.808465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.457 ms 00:17:10.101 [2024-10-16 20:24:24.808473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.808564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.808575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.101 [2024-10-16 20:24:24.808582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:10.101 [2024-10-16 20:24:24.808590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.808614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.808624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:10.101 [2024-10-16 20:24:24.808630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:10.101 [2024-10-16 20:24:24.808639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.808664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:10.101 [2024-10-16 20:24:24.812083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.812120] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.101 [2024-10-16 20:24:24.812130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.427 ms 00:17:10.101 [2024-10-16 20:24:24.812137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.812184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.812192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:10.101 [2024-10-16 20:24:24.812201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:10.101 [2024-10-16 20:24:24.812210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.812228] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:10.101 [2024-10-16 20:24:24.812247] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:10.101 [2024-10-16 20:24:24.812277] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:10.101 [2024-10-16 20:24:24.812291] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:10.101 [2024-10-16 20:24:24.812352] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:10.101 [2024-10-16 20:24:24.812363] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:10.101 [2024-10-16 20:24:24.812378] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:10.101 [2024-10-16 20:24:24.812386] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812395] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812401] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:10.101 [2024-10-16 20:24:24.812409] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:10.101 [2024-10-16 20:24:24.812415] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:10.101 [2024-10-16 20:24:24.812424] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:10.101 [2024-10-16 20:24:24.812430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.812438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:10.101 [2024-10-16 20:24:24.812444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:17:10.101 [2024-10-16 20:24:24.812451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.812503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.101 [2024-10-16 20:24:24.812512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:10.101 [2024-10-16 20:24:24.812519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:10.101 [2024-10-16 20:24:24.812526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.101 [2024-10-16 20:24:24.812598] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:17:10.101 [2024-10-16 20:24:24.812610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:10.101 [2024-10-16 20:24:24.812616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:10.101 [2024-10-16 20:24:24.812639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:10.101 [2024-10-16 20:24:24.812664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.101 [2024-10-16 20:24:24.812680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:10.101 [2024-10-16 20:24:24.812687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:10.101 [2024-10-16 20:24:24.812693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.101 [2024-10-16 20:24:24.812700] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:10.101 [2024-10-16 20:24:24.812707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:10.101 [2024-10-16 20:24:24.812714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:10.101 [2024-10-16 20:24:24.812728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:10.101 [2024-10-16 20:24:24.812733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:10.101 [2024-10-16 20:24:24.812745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:10.101 [2024-10-16 20:24:24.812752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:10.101 [2024-10-16 20:24:24.812767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:10.101 [2024-10-16 20:24:24.812792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:10.101 [2024-10-16 20:24:24.812811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:10.101 [2024-10-16 20:24:24.812831] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:10.101 [2024-10-16 20:24:24.812850] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.101 [2024-10-16 20:24:24.812864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:10.101 [2024-10-16 20:24:24.812870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:10.101 [2024-10-16 20:24:24.812878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.101 [2024-10-16 20:24:24.812884] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:10.101 [2024-10-16 20:24:24.812894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:10.101 [2024-10-16 20:24:24.812901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.101 [2024-10-16 20:24:24.812909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.101 [2024-10-16 20:24:24.812915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:10.101 [2024-10-16 20:24:24.812921] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:10.101 [2024-10-16 20:24:24.812926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:10.101 [2024-10-16 20:24:24.812933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:10.101 [2024-10-16 20:24:24.812938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:10.102 [2024-10-16 20:24:24.812945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:10.102 [2024-10-16 20:24:24.812951] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:10.102 [2024-10-16 20:24:24.812960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.102 [2024-10-16 20:24:24.812967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:10.102 [2024-10-16 20:24:24.812974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:10.102 [2024-10-16 20:24:24.812979] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:10.102 [2024-10-16 20:24:24.812989] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:10.102 [2024-10-16 20:24:24.812995] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:10.102 [2024-10-16 20:24:24.813002] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:10.102 [2024-10-16 20:24:24.813007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:10.102 [2024-10-16 
20:24:24.813015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:10.102 [2024-10-16 20:24:24.813021] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:10.102 [2024-10-16 20:24:24.813028] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:10.102 [2024-10-16 20:24:24.813033] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:10.102 [2024-10-16 20:24:24.813055] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:10.102 [2024-10-16 20:24:24.813062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:10.102 [2024-10-16 20:24:24.813068] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:10.102 [2024-10-16 20:24:24.813075] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.102 [2024-10-16 20:24:24.813083] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:10.102 [2024-10-16 20:24:24.813090] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:10.102 [2024-10-16 20:24:24.813101] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:10.102 [2024-10-16 20:24:24.813108] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:10.102 [2024-10-16 20:24:24.813117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.102 [2024-10-16 20:24:24.813125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:10.102 [2024-10-16 20:24:24.813133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:17:10.102 [2024-10-16 20:24:24.813139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.102 [2024-10-16 20:24:24.827910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.102 [2024-10-16 20:24:24.827952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:10.102 [2024-10-16 20:24:24.827967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.729 ms 00:17:10.102 [2024-10-16 20:24:24.827977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.102 [2024-10-16 20:24:24.828104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.102 [2024-10-16 20:24:24.828125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:10.102 [2024-10-16 20:24:24.828135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:10.102 [2024-10-16 20:24:24.828142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.102 [2024-10-16 20:24:24.854878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.102 [2024-10-16 
00:17:10.102 [2024-10-16 20:24:24.854912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:10.102 [2024-10-16 20:24:24.854922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.715 ms
00:17:10.102 [2024-10-16 20:24:24.854929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.854983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.854993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:10.102 [2024-10-16 20:24:24.855000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:17:10.102 [2024-10-16 20:24:24.855006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.855378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.855402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:10.102 [2024-10-16 20:24:24.855412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms
00:17:10.102 [2024-10-16 20:24:24.855419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.855519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.855527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:10.102 [2024-10-16 20:24:24.855537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms
00:17:10.102 [2024-10-16 20:24:24.855542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.868607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.868637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:10.102 [2024-10-16 20:24:24.868649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.047 ms
00:17:10.102 [2024-10-16 20:24:24.868654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.879119] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:17:10.102 [2024-10-16 20:24:24.879149] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:17:10.102 [2024-10-16 20:24:24.879160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.879166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:17:10.102 [2024-10-16 20:24:24.879175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.423 ms
00:17:10.102 [2024-10-16 20:24:24.879181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.898304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.898332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:17:10.102 [2024-10-16 20:24:24.898343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.063 ms
00:17:10.102 [2024-10-16 20:24:24.898349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.907743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.907775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:17:10.102 [2024-10-16 20:24:24.907784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.337 ms
00:17:10.102 [2024-10-16 20:24:24.907789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.916992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.917019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:17:10.102 [2024-10-16 20:24:24.917030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.159 ms
00:17:10.102 [2024-10-16 20:24:24.917035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.917316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.917345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:17:10.102 [2024-10-16 20:24:24.917355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms
00:17:10.102 [2024-10-16 20:24:24.917361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.963765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.963805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:17:10.102 [2024-10-16 20:24:24.963817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.387 ms
00:17:10.102 [2024-10-16 20:24:24.963823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.971825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:17:10.102 [2024-10-16 20:24:24.983189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.983221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:17:10.102 [2024-10-16 20:24:24.983230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.310 ms
00:17:10.102 [2024-10-16 20:24:24.983238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.983287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.983298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:17:10.102 [2024-10-16 20:24:24.983304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:10.102 [2024-10-16 20:24:24.983313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.983351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.983358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:17:10.102 [2024-10-16 20:24:24.983364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms
00:17:10.102 [2024-10-16 20:24:24.983371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.984297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.984324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:17:10.102 [2024-10-16 20:24:24.984331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms
00:17:10.102 [2024-10-16 20:24:24.984337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.984362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.984372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:10.102 [2024-10-16 20:24:24.984378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:10.102 [2024-10-16 20:24:24.984384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:24.984410] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:17:10.102 [2024-10-16 20:24:24.984419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:24.984425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:17:10.102 [2024-10-16 20:24:24.984432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:17:10.102 [2024-10-16 20:24:24.984438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.102 [2024-10-16 20:24:25.003358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.102 [2024-10-16 20:24:25.003384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:10.102 [2024-10-16 20:24:25.003394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.901 ms
00:17:10.102 [2024-10-16 20:24:25.003400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.103 [2024-10-16 20:24:25.003467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.103 [2024-10-16 20:24:25.003475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:10.103 [2024-10-16 20:24:25.003483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:17:10.103 [2024-10-16 20:24:25.003490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.103 [2024-10-16 20:24:25.004093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:10.103 [2024-10-16 20:24:25.006529] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.495 ms, result 0
00:17:10.103 [2024-10-16 20:24:25.008112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:10.425 Some configs were skipped because the RPC state that can call them passed over.
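Each management step in the trace above is reported by mngt/ftl_mngt.c as a four-record group (Action, name, duration, status), and finish_msg then reports the process as a whole ('FTL startup', duration = 221.495 ms, result 0 here). As a reading aid, a small awk filter can tabulate per-step durations from a console log in this one-record-per-line form; the build.log filename is hypothetical:

    # print "<duration> ms  <step name>" for every trace_step group in the log
    awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, name }' build.log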
00:17:10.425 20:24:25 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:17:10.425 [2024-10-16 20:24:25.234582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.425 [2024-10-16 20:24:25.234621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:17:10.425 [2024-10-16 20:24:25.234629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.608 ms
00:17:10.425 [2024-10-16 20:24:25.234636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.425 [2024-10-16 20:24:25.234664] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.689 ms, result 0
00:17:10.425 true
00:17:10.425 20:24:25 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:17:10.686 [2024-10-16 20:24:25.441733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.686 [2024-10-16 20:24:25.441765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:17:10.686 [2024-10-16 20:24:25.441775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.893 ms
00:17:10.686 [2024-10-16 20:24:25.441781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.686 [2024-10-16 20:24:25.441809] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.968 ms, result 0
00:17:10.686 true
00:17:10.686 20:24:25 -- ftl/trim.sh@102 -- # killprocess 72585
00:17:10.686 20:24:25 -- common/autotest_common.sh@926 -- # '[' -z 72585 ']'
00:17:10.686 20:24:25 -- common/autotest_common.sh@930 -- # kill -0 72585
00:17:10.686 20:24:25 -- common/autotest_common.sh@931 -- # uname
00:17:10.686 20:24:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:10.686 20:24:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72585
00:17:10.686 killing process with pid 72585
00:17:10.686 20:24:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:17:10.686 20:24:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:17:10.686 20:24:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72585'
00:17:10.686 20:24:25 -- common/autotest_common.sh@945 -- # kill 72585
00:17:10.686 20:24:25 -- common/autotest_common.sh@950 -- # wait 72585
00:17:11.260 [2024-10-16 20:24:26.014321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.014365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:11.260 [2024-10-16 20:24:26.014375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:11.260 [2024-10-16 20:24:26.014384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.014402] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:11.260 [2024-10-16 20:24:26.016466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.016491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:11.260 [2024-10-16 20:24:26.016502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.050 ms
00:17:11.260 [2024-10-16 20:24:26.016509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
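The two bdev_ftl_unmap calls above trim 1024 blocks at each end of the device: the FTL layout dump further down reports 23592960 L2P entries, and 23592960 - 1024 = 23591936, so --lba 23591936 addresses the final 1024-block range. The killprocess helper whose xtrace appears above can be reconstructed roughly as follows; this is a sketch inferred from the traced commands, not the actual test/common/autotest_common.sh source:

    killprocess() {
        # require a pid argument ('[' -z 72585 ']' in the trace)
        [ -z "$1" ] && return 1
        # bail out if the process is already gone
        kill -0 "$1" || return 1
        # on Linux, resolve the process name so a sudo wrapper is never killed
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$1")
        fi
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $1"
        kill "$1"
        # reap the child so its exit status is collected (kill 72585; wait 72585 above)
        wait "$1"
    }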
00:17:11.260 [2024-10-16 20:24:26.016718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.016730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:11.260 [2024-10-16 20:24:26.016738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms
00:17:11.260 [2024-10-16 20:24:26.016744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.019958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.019986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:11.260 [2024-10-16 20:24:26.019995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms
00:17:11.260 [2024-10-16 20:24:26.020001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.025296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.025329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:17:11.260 [2024-10-16 20:24:26.025338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.268 ms
00:17:11.260 [2024-10-16 20:24:26.025347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.032816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.032843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:11.260 [2024-10-16 20:24:26.032853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.418 ms
00:17:11.260 [2024-10-16 20:24:26.032859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.039393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.039422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:11.260 [2024-10-16 20:24:26.039432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.503 ms
00:17:11.260 [2024-10-16 20:24:26.039437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.039539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.039551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:17:11.260 [2024-10-16 20:24:26.039559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:17:11.260 [2024-10-16 20:24:26.039565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.047723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.047747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:17:11.260 [2024-10-16 20:24:26.047756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.142 ms
00:17:11.260 [2024-10-16 20:24:26.047762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.055505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.055531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:17:11.260 [2024-10-16 20:24:26.055543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.713 ms
00:17:11.260 [2024-10-16 20:24:26.055548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.062719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.062746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:11.260 [2024-10-16 20:24:26.062754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.133 ms
00:17:11.260 [2024-10-16 20:24:26.062759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.069877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.260 [2024-10-16 20:24:26.069902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:17:11.260 [2024-10-16 20:24:26.069910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.062 ms
00:17:11.260 [2024-10-16 20:24:26.069915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.260 [2024-10-16 20:24:26.069941] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:11.260 [2024-10-16 20:24:26.069953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.069963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.069969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.069976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.069982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.069991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.069996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:17:11.260 [2024-10-16 20:24:26.070299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:11.261 [2024-10-16 20:24:26.070610] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:11.261 [2024-10-16 20:24:26.070618] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71d3b1be-a432-41c7-ac06-b432fb201faa
00:17:11.261 [2024-10-16 20:24:26.070624] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:11.261 [2024-10-16 20:24:26.070631] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:11.261 [2024-10-16 20:24:26.070636] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:11.261 [2024-10-16 20:24:26.070643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:11.261 [2024-10-16 20:24:26.070649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:11.261 [2024-10-16 20:24:26.070656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:11.261 [2024-10-16 20:24:26.070661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:11.261 [2024-10-16 20:24:26.070667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:11.261 [2024-10-16 20:24:26.070672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:11.261 [2024-10-16 20:24:26.070679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.261 [2024-10-16 20:24:26.070685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:11.261 [2024-10-16 20:24:26.070692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms
00:17:11.261 [2024-10-16 20:24:26.070699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.261 [2024-10-16 20:24:26.080172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.261 [2024-10-16 20:24:26.080197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:11.261 [2024-10-16 20:24:26.080207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.457 ms
00:17:11.261 [2024-10-16 20:24:26.080213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.261 [2024-10-16 20:24:26.080377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:11.261 [2024-10-16 20:24:26.080393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:11.261 [2024-10-16 20:24:26.080402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms
00:17:11.261 [2024-10-16 20:24:26.080408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.261 [2024-10-16 20:24:26.114864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.261 [2024-10-16 20:24:26.114893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:11.261 [2024-10-16 20:24:26.114902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.261 [2024-10-16 20:24:26.114908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.261 [2024-10-16 20:24:26.114966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.261 [2024-10-16 20:24:26.114973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:11.261 [2024-10-16 20:24:26.114982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.261 [2024-10-16 20:24:26.114987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.261 [2024-10-16 20:24:26.115021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.261 [2024-10-16 20:24:26.115028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:11.261 [2024-10-16 20:24:26.115036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.261 [2024-10-16 20:24:26.115052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.261 [2024-10-16 20:24:26.115066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.261 [2024-10-16 20:24:26.115072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:11.261 [2024-10-16 20:24:26.115079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.261 [2024-10-16 20:24:26.115086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.261 [2024-10-16 20:24:26.175029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.261 [2024-10-16 20:24:26.175069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:11.261 [2024-10-16 20:24:26.175079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.261 [2024-10-16 20:24:26.175086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.523 [2024-10-16 20:24:26.197248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.523 [2024-10-16 20:24:26.197275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:11.523 [2024-10-16 20:24:26.197286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.523 [2024-10-16 20:24:26.197292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.523 [2024-10-16 20:24:26.197328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.523 [2024-10-16 20:24:26.197336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:11.523 [2024-10-16 20:24:26.197344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.523 [2024-10-16 20:24:26.197350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.523 [2024-10-16 20:24:26.197374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.523 [2024-10-16 20:24:26.197380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:11.523 [2024-10-16 20:24:26.197387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.523 [2024-10-16 20:24:26.197393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.523 [2024-10-16 20:24:26.197463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.523 [2024-10-16 20:24:26.197470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:11.523 [2024-10-16 20:24:26.197478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.523 [2024-10-16 20:24:26.197483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.523 [2024-10-16 20:24:26.197507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.523 [2024-10-16 20:24:26.197514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:11.523 [2024-10-16 20:24:26.197521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.523 [2024-10-16 20:24:26.197526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.523 [2024-10-16 20:24:26.197556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.523 [2024-10-16 20:24:26.197562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:11.523 [2024-10-16 20:24:26.197571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.523 [2024-10-16 20:24:26.197576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.523 [2024-10-16 20:24:26.197625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.523 [2024-10-16 20:24:26.197632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:11.523 [2024-10-16 20:24:26.197639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.523 [2024-10-16 20:24:26.197644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.523 [2024-10-16 20:24:26.197747] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 183.411 ms, result 0
00:17:12.092 20:24:26 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:12.092 [2024-10-16 20:24:26.885228] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
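In the shutdown statistics above, WAF (write amplification factor) is media writes divided by user writes; with user writes = 0 and total writes = 960 (presumably metadata traffic from startup and shutdown), the ratio is reported as inf. trim.sh@105 then rebuilds the bdev stack from ftl.json and uses spdk_dd to dump 65536 blocks from ftl0 into test/ftl/data so the trimmed ranges can be checked. A rough sketch of such a check for the first range, assuming 4096-byte FTL blocks and a zero-fill expectation for trimmed blocks (both assumptions, not taken from this log):

    # the 1024 blocks unmapped at LBA 0 should now read back as zeroes
    DATA=/home/vagrant/spdk_repo/spdk/test/ftl/data
    if cmp -n $((1024 * 4096)) "$DATA" /dev/zero; then
        echo "trimmed range at LBA 0 reads back as zeroes"
    fi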
00:17:12.092 [2024-10-16 20:24:26.885313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72640 ]
00:17:12.351 [2024-10-16 20:24:27.024434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:12.351 [2024-10-16 20:24:27.167156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:12.612 [2024-10-16 20:24:27.371068] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:12.612 [2024-10-16 20:24:27.371114] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:12.612 [2024-10-16 20:24:27.521631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.612 [2024-10-16 20:24:27.521673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:12.612 [2024-10-16 20:24:27.521686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:17:12.612 [2024-10-16 20:24:27.521695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.612 [2024-10-16 20:24:27.524314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.612 [2024-10-16 20:24:27.524349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:12.612 [2024-10-16 20:24:27.524359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms
00:17:12.612 [2024-10-16 20:24:27.524367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.612 [2024-10-16 20:24:27.524442] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:12.613 [2024-10-16 20:24:27.525238] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:12.613 [2024-10-16 20:24:27.525270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.613 [2024-10-16 20:24:27.525278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:12.613 [2024-10-16 20:24:27.525287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms
00:17:12.613 [2024-10-16 20:24:27.525295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.613 [2024-10-16 20:24:27.526599] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:12.613 [2024-10-16 20:24:27.539905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.613 [2024-10-16 20:24:27.539934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:17:12.613 [2024-10-16 20:24:27.539946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.307 ms
00:17:12.613 [2024-10-16 20:24:27.539953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.613 [2024-10-16 20:24:27.540053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.613 [2024-10-16 20:24:27.540065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:17:12.613 [2024-10-16 20:24:27.540073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:17:12.613 [2024-10-16 20:24:27.540081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.875 [2024-10-16 20:24:27.545825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.875 [2024-10-16 20:24:27.545852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:12.875 [2024-10-16 20:24:27.545865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.702 ms
00:17:12.875 [2024-10-16 20:24:27.545873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.875 [2024-10-16 20:24:27.545969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.876 [2024-10-16 20:24:27.545979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:12.876 [2024-10-16 20:24:27.545987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms
00:17:12.876 [2024-10-16 20:24:27.545994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.876 [2024-10-16 20:24:27.546019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.876 [2024-10-16 20:24:27.546027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:17:12.876 [2024-10-16 20:24:27.546034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:12.876 [2024-10-16 20:24:27.546054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.876 [2024-10-16 20:24:27.546084] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:17:12.876 [2024-10-16 20:24:27.549731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.876 [2024-10-16 20:24:27.549755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:12.876 [2024-10-16 20:24:27.549767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms
00:17:12.876 [2024-10-16 20:24:27.549774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.876 [2024-10-16 20:24:27.549812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.876 [2024-10-16 20:24:27.549821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:17:12.876 [2024-10-16 20:24:27.549829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:17:12.876 [2024-10-16 20:24:27.549836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.876 [2024-10-16 20:24:27.549853] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:17:12.876 [2024-10-16 20:24:27.549870] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:17:12.876 [2024-10-16 20:24:27.549902] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:17:12.876 [2024-10-16 20:24:27.549919] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:17:12.876 [2024-10-16 20:24:27.549991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:17:12.876 [2024-10-16 20:24:27.550001] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:17:12.876 [2024-10-16 20:24:27.550011] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:17:12.876 [2024-10-16 20:24:27.550020] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550029] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550037] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:17:12.876 [2024-10-16 20:24:27.550055] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:17:12.876 [2024-10-16 20:24:27.550065] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:17:12.876 [2024-10-16 20:24:27.550072] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:17:12.876 [2024-10-16 20:24:27.550079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.876 [2024-10-16 20:24:27.550087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:17:12.876 [2024-10-16 20:24:27.550095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms
00:17:12.876 [2024-10-16 20:24:27.550101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.876 [2024-10-16 20:24:27.550188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.876 [2024-10-16 20:24:27.550198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:17:12.876 [2024-10-16 20:24:27.550205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:17:12.876 [2024-10-16 20:24:27.550212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.876 [2024-10-16 20:24:27.550288] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:17:12.876 [2024-10-16 20:24:27.550298] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:17:12.876 [2024-10-16 20:24:27.550306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550320] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:17:12.876 [2024-10-16 20:24:27.550327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550341] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:17:12.876 [2024-10-16 20:24:27.550348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:12.876 [2024-10-16 20:24:27.550361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:17:12.876 [2024-10-16 20:24:27.550367] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:17:12.876 [2024-10-16 20:24:27.550374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:12.876 [2024-10-16 20:24:27.550380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:17:12.876 [2024-10-16 20:24:27.550395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB
00:17:12.876 [2024-10-16 20:24:27.550402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550409] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:17:12.876 [2024-10-16 20:24:27.550415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB
00:17:12.876 [2024-10-16 20:24:27.550421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:17:12.876 [2024-10-16 20:24:27.550435] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB
00:17:12.876 [2024-10-16 20:24:27.550441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:17:12.876 [2024-10-16 20:24:27.550455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:17:12.876 [2024-10-16 20:24:27.550474] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:17:12.876 [2024-10-16 20:24:27.550492] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550505] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:17:12.876 [2024-10-16 20:24:27.550511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:17:12.876 [2024-10-16 20:24:27.550530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:12.876 [2024-10-16 20:24:27.550542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:17:12.876 [2024-10-16 20:24:27.550549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB
00:17:12.876 [2024-10-16 20:24:27.550555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:12.876 [2024-10-16 20:24:27.550561] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:17:12.876 [2024-10-16 20:24:27.550568] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:17:12.876 [2024-10-16 20:24:27.550577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:12.876 [2024-10-16 20:24:27.550592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:17:12.876 [2024-10-16 20:24:27.550598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:17:12.876 [2024-10-16 20:24:27.550605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:17:12.876 [2024-10-16 20:24:27.550613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:17:12.876 [2024-10-16 20:24:27.550619] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:17:12.876 [2024-10-16 20:24:27.550626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:17:12.876 [2024-10-16 20:24:27.550633] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:17:12.876 [2024-10-16 20:24:27.550643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:12.876 [2024-10-16 20:24:27.550655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:17:12.876 [2024-10-16 20:24:27.550663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80
00:17:12.876 [2024-10-16 20:24:27.550670] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80
00:17:12.876 [2024-10-16 20:24:27.550677] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400
00:17:12.876 [2024-10-16 20:24:27.550684] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400
00:17:12.876 [2024-10-16 20:24:27.550691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400
00:17:12.876 [2024-10-16 20:24:27.550698] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400
00:17:12.876 [2024-10-16 20:24:27.550705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40
00:17:12.876 [2024-10-16 20:24:27.550712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40
00:17:12.876 [2024-10-16 20:24:27.550718] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20
00:17:12.877 [2024-10-16 20:24:27.550725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20
00:17:12.877 [2024-10-16 20:24:27.550732] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000
00:17:12.877 [2024-10-16 20:24:27.550740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720
00:17:12.877 [2024-10-16 20:24:27.550746] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:17:12.877 [2024-10-16 20:24:27.550754] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:12.877 [2024-10-16 20:24:27.550762] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:17:12.877 [2024-10-16 20:24:27.550769] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:17:12.877 [2024-10-16 20:24:27.550777] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:17:12.877 [2024-10-16 20:24:27.550784] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:17:12.877 [2024-10-16 20:24:27.550791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.877 [2024-10-16 20:24:27.550798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:17:12.877 [2024-10-16 20:24:27.550805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms
00:17:12.877 [2024-10-16 20:24:27.550812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.877 [2024-10-16 20:24:27.566566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.877 [2024-10-16 20:24:27.566600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:12.877 [2024-10-16 20:24:27.566613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.711 ms
00:17:12.877 [2024-10-16 20:24:27.566622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.877 [2024-10-16 20:24:27.566743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.877 [2024-10-16 20:24:27.566754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:17:12.877 [2024-10-16 20:24:27.566763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
00:17:12.877 [2024-10-16 20:24:27.566771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.877 [2024-10-16 20:24:27.611313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.877 [2024-10-16 20:24:27.611357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:12.877 [2024-10-16 20:24:27.611370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.515 ms
00:17:12.877 [2024-10-16 20:24:27.611378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.877 [2024-10-16 20:24:27.611461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.877 [2024-10-16 20:24:27.611472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:12.877 [2024-10-16 20:24:27.611486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:12.877 [2024-10-16 20:24:27.611494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.877 [2024-10-16 20:24:27.612003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.877 [2024-10-16 20:24:27.612036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:12.877 [2024-10-16 20:24:27.612063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms
00:17:12.877 [2024-10-16 20:24:27.612072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.877 [2024-10-16 20:24:27.612209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.877 [2024-10-16 20:24:27.612220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:12.877 [2024-10-16 20:24:27.612228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms
00:17:12.877 [2024-10-16 20:24:27.612239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:12.877 [2024-10-16 20:24:27.629317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:12.877 [2024-10-16 20:24:27.629354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:12.877 [2024-10-16 20:24:27.629368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.055 ms
00:17:12.877 [2024-10-16 20:24:27.629375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
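The startup layout dump above is internally consistent and worth a quick cross-check (the 4-KiB FTL block size is an assumption; it is not printed in this excerpt):

    23592960 L2P entries x 4 B/entry   = 94371840 B = 90.00 MiB   -> matches "Region l2p ... blocks: 90.00 MiB"
    23592960 blocks x 4 KiB/block      = 92160 MiB  = 90 GiB of user-visible capacity
    data_btm 102400.00 MiB - 92160 MiB = 10240 MiB  ~ 10 GiB of spare area on the base device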
[2024-10-16 20:24:27.629375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.643384] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:12.877 [2024-10-16 20:24:27.643424] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:12.877 [2024-10-16 20:24:27.643436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.643444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:12.877 [2024-10-16 20:24:27.643454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.950 ms 00:17:12.877 [2024-10-16 20:24:27.643462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.669645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.669698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:12.877 [2024-10-16 20:24:27.669710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.093 ms 00:17:12.877 [2024-10-16 20:24:27.669717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.682832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.682867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:12.877 [2024-10-16 20:24:27.682888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.024 ms 00:17:12.877 [2024-10-16 20:24:27.682895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.695582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.695624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:12.877 [2024-10-16 20:24:27.695636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.604 ms 00:17:12.877 [2024-10-16 20:24:27.695642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.696076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.696095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:12.877 [2024-10-16 20:24:27.696105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:17:12.877 [2024-10-16 20:24:27.696116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.763884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.763935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:12.877 [2024-10-16 20:24:27.763956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.744 ms 00:17:12.877 [2024-10-16 20:24:27.763965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.775357] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:12.877 [2024-10-16 20:24:27.793938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.794184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:12.877 [2024-10-16 20:24:27.794207] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.840 ms 00:17:12.877 [2024-10-16 20:24:27.794216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.794302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.794316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:12.877 [2024-10-16 20:24:27.794326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:12.877 [2024-10-16 20:24:27.794337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.794394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.794404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:12.877 [2024-10-16 20:24:27.794412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:12.877 [2024-10-16 20:24:27.794420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.795774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.795817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:12.877 [2024-10-16 20:24:27.795828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:17:12.877 [2024-10-16 20:24:27.795835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.795875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.795884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:12.877 [2024-10-16 20:24:27.795893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:12.877 [2024-10-16 20:24:27.795900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.877 [2024-10-16 20:24:27.795939] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:12.877 [2024-10-16 20:24:27.795948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.877 [2024-10-16 20:24:27.795957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:12.877 [2024-10-16 20:24:27.795965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:12.877 [2024-10-16 20:24:27.795973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.140 [2024-10-16 20:24:27.822259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.140 [2024-10-16 20:24:27.822306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:13.140 [2024-10-16 20:24:27.822319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.262 ms 00:17:13.140 [2024-10-16 20:24:27.822326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.140 [2024-10-16 20:24:27.822434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.140 [2024-10-16 20:24:27.822445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:13.140 [2024-10-16 20:24:27.822455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:13.140 [2024-10-16 20:24:27.822463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.140 [2024-10-16 20:24:27.823488] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:13.140 [2024-10-16 20:24:27.827077] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.540 ms, result 0 00:17:13.140 [2024-10-16 20:24:27.828409] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:13.140 [2024-10-16 20:24:27.842557] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:14.083  [2024-10-16T20:24:29.956Z] Copying: 20/256 [MB] (20 MBps) [2024-10-16T20:24:31.345Z] Copying: 37/256 [MB] (17 MBps) [2024-10-16T20:24:31.917Z] Copying: 50/256 [MB] (13 MBps) [2024-10-16T20:24:33.303Z] Copying: 65/256 [MB] (14 MBps) [2024-10-16T20:24:34.248Z] Copying: 81/256 [MB] (15 MBps) [2024-10-16T20:24:35.193Z] Copying: 93/256 [MB] (11 MBps) [2024-10-16T20:24:36.138Z] Copying: 111/256 [MB] (18 MBps) [2024-10-16T20:24:37.082Z] Copying: 122/256 [MB] (10 MBps) [2024-10-16T20:24:38.026Z] Copying: 135/256 [MB] (13 MBps) [2024-10-16T20:24:38.971Z] Copying: 152/256 [MB] (16 MBps) [2024-10-16T20:24:39.914Z] Copying: 169/256 [MB] (17 MBps) [2024-10-16T20:24:41.301Z] Copying: 185/256 [MB] (16 MBps) [2024-10-16T20:24:41.933Z] Copying: 208/256 [MB] (22 MBps) [2024-10-16T20:24:43.322Z] Copying: 229/256 [MB] (21 MBps) [2024-10-16T20:24:43.322Z] Copying: 250/256 [MB] (20 MBps) [2024-10-16T20:24:43.585Z] Copying: 256/256 [MB] (average 16 MBps)[2024-10-16 20:24:43.390128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:28.656 [2024-10-16 20:24:43.405280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.656 [2024-10-16 20:24:43.405453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:28.656 [2024-10-16 20:24:43.405476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:28.656 [2024-10-16 20:24:43.405484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.656 [2024-10-16 20:24:43.405516] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:28.656 [2024-10-16 20:24:43.408470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.656 [2024-10-16 20:24:43.408614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:28.656 [2024-10-16 20:24:43.408635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms 00:17:28.656 [2024-10-16 20:24:43.408643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.656 [2024-10-16 20:24:43.408938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.656 [2024-10-16 20:24:43.408949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:28.656 [2024-10-16 20:24:43.408962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:28.656 [2024-10-16 20:24:43.408970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.656 [2024-10-16 20:24:43.412699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.656 [2024-10-16 20:24:43.412725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:28.656 [2024-10-16 20:24:43.412735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:17:28.656 [2024-10-16 20:24:43.412745] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.656 [2024-10-16 20:24:43.419651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.656 [2024-10-16 20:24:43.419805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:28.656 [2024-10-16 20:24:43.419825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.871 ms 00:17:28.656 [2024-10-16 20:24:43.419841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.656 [2024-10-16 20:24:43.445250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.656 [2024-10-16 20:24:43.445296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:28.656 [2024-10-16 20:24:43.445308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.334 ms 00:17:28.656 [2024-10-16 20:24:43.445316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.657 [2024-10-16 20:24:43.461976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.657 [2024-10-16 20:24:43.462021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:28.657 [2024-10-16 20:24:43.462035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.598 ms 00:17:28.657 [2024-10-16 20:24:43.462063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.657 [2024-10-16 20:24:43.462228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.657 [2024-10-16 20:24:43.462262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:28.657 [2024-10-16 20:24:43.462273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:28.657 [2024-10-16 20:24:43.462281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.657 [2024-10-16 20:24:43.488398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.657 [2024-10-16 20:24:43.488440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:28.657 [2024-10-16 20:24:43.488451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.099 ms 00:17:28.657 [2024-10-16 20:24:43.488458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.657 [2024-10-16 20:24:43.513522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.657 [2024-10-16 20:24:43.513699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:28.657 [2024-10-16 20:24:43.513720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.002 ms 00:17:28.657 [2024-10-16 20:24:43.513728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.657 [2024-10-16 20:24:43.539133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.657 [2024-10-16 20:24:43.539187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:28.657 [2024-10-16 20:24:43.539201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.035 ms 00:17:28.657 [2024-10-16 20:24:43.539208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.657 [2024-10-16 20:24:43.564065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.657 [2024-10-16 20:24:43.564242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:28.657 [2024-10-16 20:24:43.564263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 24.760 ms 00:17:28.657 [2024-10-16 20:24:43.564272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.657 [2024-10-16 20:24:43.564326] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:28.657 [2024-10-16 20:24:43.564342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 
[2024-10-16 20:24:43.564517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:17:28.657 [2024-10-16 20:24:43.564709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:28.657 [2024-10-16 20:24:43.564777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.564996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:28.658 [2024-10-16 20:24:43.565149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:28.658 [2024-10-16 20:24:43.565157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71d3b1be-a432-41c7-ac06-b432fb201faa 00:17:28.658 [2024-10-16 20:24:43.565165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:28.658 [2024-10-16 20:24:43.565173] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:28.658 [2024-10-16 20:24:43.565181] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:28.658 [2024-10-16 20:24:43.565190] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:28.658 [2024-10-16 20:24:43.565200] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:28.658 [2024-10-16 20:24:43.565209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:28.658 [2024-10-16 20:24:43.565226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:28.658 [2024-10-16 20:24:43.565233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:28.658 [2024-10-16 20:24:43.565240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:28.658 [2024-10-16 20:24:43.565247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.658 [2024-10-16 20:24:43.565256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:28.658 [2024-10-16 20:24:43.565265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:17:28.658 [2024-10-16 20:24:43.565273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.658 [2024-10-16 20:24:43.578541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.658 [2024-10-16 20:24:43.578581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:28.658 [2024-10-16 20:24:43.578599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.234 ms 00:17:28.658 [2024-10-16 20:24:43.578606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.658 [2024-10-16 20:24:43.578843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.658 [2024-10-16 20:24:43.578853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:28.658 [2024-10-16 20:24:43.578862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:17:28.658 [2024-10-16 20:24:43.578869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.620164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 20:24:43.620218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.919 [2024-10-16 20:24:43.620228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.919 [2024-10-16 20:24:43.620236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.620329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 
20:24:43.620339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.919 [2024-10-16 20:24:43.620347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.919 [2024-10-16 20:24:43.620355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.620402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 20:24:43.620412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.919 [2024-10-16 20:24:43.620425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.919 [2024-10-16 20:24:43.620433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.620452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 20:24:43.620460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.919 [2024-10-16 20:24:43.620468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.919 [2024-10-16 20:24:43.620475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.701946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 20:24:43.702001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.919 [2024-10-16 20:24:43.702013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.919 [2024-10-16 20:24:43.702021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.734308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 20:24:43.734343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.919 [2024-10-16 20:24:43.734354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.919 [2024-10-16 20:24:43.734363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.734427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 20:24:43.734438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.919 [2024-10-16 20:24:43.734446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.919 [2024-10-16 20:24:43.734461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.734496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 20:24:43.734504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.919 [2024-10-16 20:24:43.734513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.919 [2024-10-16 20:24:43.734521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.734619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 20:24:43.734630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.919 [2024-10-16 20:24:43.734639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.919 [2024-10-16 20:24:43.734647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.919 [2024-10-16 20:24:43.734684] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.919 [2024-10-16 20:24:43.734692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.920 [2024-10-16 20:24:43.734702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.920 [2024-10-16 20:24:43.734709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.920 [2024-10-16 20:24:43.734754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.920 [2024-10-16 20:24:43.734765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.920 [2024-10-16 20:24:43.734773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.920 [2024-10-16 20:24:43.734781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.920 [2024-10-16 20:24:43.734839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.920 [2024-10-16 20:24:43.734849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.920 [2024-10-16 20:24:43.734857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.920 [2024-10-16 20:24:43.734866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.920 [2024-10-16 20:24:43.735025] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.765 ms, result 0 00:17:29.864 00:17:29.864 00:17:29.864 20:24:44 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:30.437 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:30.437 20:24:45 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:30.437 20:24:45 -- ftl/trim.sh@109 -- # fio_kill 00:17:30.437 20:24:45 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:30.437 20:24:45 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.437 20:24:45 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:30.437 20:24:45 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:30.437 20:24:45 -- ftl/trim.sh@20 -- # killprocess 72585 00:17:30.437 Process with pid 72585 is not found 00:17:30.437 20:24:45 -- common/autotest_common.sh@926 -- # '[' -z 72585 ']' 00:17:30.437 20:24:45 -- common/autotest_common.sh@930 -- # kill -0 72585 00:17:30.437 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (72585) - No such process 00:17:30.437 20:24:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 72585 is not found' 00:17:30.437 00:17:30.437 real 1m13.994s 00:17:30.437 user 1m37.816s 00:17:30.437 sys 0m5.542s 00:17:30.437 20:24:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.437 20:24:45 -- common/autotest_common.sh@10 -- # set +x 00:17:30.437 ************************************ 00:17:30.437 END TEST ftl_trim 00:17:30.437 ************************************ 00:17:30.437 20:24:45 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:30.437 20:24:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:30.437 20:24:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:30.437 20:24:45 -- common/autotest_common.sh@10 -- # set +x 00:17:30.438 ************************************ 00:17:30.438 START TEST ftl_restore 00:17:30.438 
************************************ 00:17:30.438 20:24:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:30.698 * Looking for test storage... 00:17:30.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.698 20:24:45 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:30.698 20:24:45 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:30.698 20:24:45 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.698 20:24:45 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.698 20:24:45 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:30.699 20:24:45 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:30.699 20:24:45 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.699 20:24:45 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:30.699 20:24:45 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:30.699 20:24:45 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.699 20:24:45 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.699 20:24:45 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:30.699 20:24:45 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:30.699 20:24:45 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:30.699 20:24:45 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:30.699 20:24:45 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:30.699 20:24:45 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:30.699 20:24:45 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.699 20:24:45 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.699 20:24:45 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:30.699 20:24:45 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:30.699 20:24:45 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:30.699 20:24:45 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:30.699 20:24:45 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:30.699 20:24:45 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:30.699 20:24:45 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:30.699 20:24:45 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:30.699 20:24:45 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:30.699 20:24:45 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:30.699 20:24:45 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.699 20:24:45 -- ftl/restore.sh@13 -- # mktemp -d 00:17:30.699 20:24:45 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.4KMKAuWZeH 00:17:30.699 20:24:45 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:30.699 20:24:45 -- ftl/restore.sh@16 -- # case $opt in 00:17:30.699 20:24:45 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:30.699 20:24:45 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:30.699 20:24:45 -- ftl/restore.sh@23 -- # shift 2 00:17:30.699 
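For reference: the xtrace above shows restore.sh entered as `restore.sh -c 0000:00:06.0 0000:00:07.0`, and the trace that follows builds an FTL bdev on top of a thin-provisioned logical volume, with a split of the second NVMe device serving as the write-buffer (NV) cache. Condensed into the equivalent rpc.py calls, this run's setup is roughly the sketch below. This is a minimal reconstruction from the trace, not the script source; the UUIDs are the values this particular run printed (they would differ on any other run), and the `device=$1` assignment is inferred from the expanded xtrace.

    # Argument handling, as traced (option string ':u:c:f'):
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # -c 0000:00:06.0 -> NV-cache device BDF
        esac
    done
    shift 2                          # consume '-c <bdf>'
    device=$1                        # remaining positional arg: 0000:00:07.0
    timeout=240

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: attach the NVMe controller, clear any stale lvstore
    # (this run deletes 10814edb-...), then carve a 103424 MiB thin lvol.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$device"
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs   # -> 54d14d06-c2a6-45f4-bde8-7aa430cd5691
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 54d14d06-c2a6-45f4-bde8-7aa430cd5691
    # NV cache: attach the second controller and split off a 5171 MiB partition.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a "$nv_cache"
    $rpc bdev_split_create nvc0n1 -s 5171 1     # -> nvc0n1p0
    # Create the FTL bdev over the lvol with a 10 MB L2P DRAM limit.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 2d8a1f58-7a65-4113-b2b0-e5585b0b24da \
        --l2p_dram_limit 10 -c nvc0n1p0

Note the `-t 240` on the final call, matching the `timeout=240` set above: it raises the RPC client timeout, presumably because FTL bdev creation (which includes the NV-cache startup traced below) can exceed the default.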
20:24:45 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:30.699 20:24:45 -- ftl/restore.sh@25 -- # timeout=240 00:17:30.699 20:24:45 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:30.699 20:24:45 -- ftl/restore.sh@39 -- # svcpid=72896 00:17:30.699 20:24:45 -- ftl/restore.sh@41 -- # waitforlisten 72896 00:17:30.699 20:24:45 -- common/autotest_common.sh@819 -- # '[' -z 72896 ']' 00:17:30.699 20:24:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.699 20:24:45 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.699 20:24:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.699 20:24:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.699 20:24:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.699 20:24:45 -- common/autotest_common.sh@10 -- # set +x 00:17:30.699 [2024-10-16 20:24:45.543376] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:30.699 [2024-10-16 20:24:45.543766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72896 ] 00:17:30.959 [2024-10-16 20:24:45.698858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.219 [2024-10-16 20:24:45.920742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:31.219 [2024-10-16 20:24:45.921237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.162 20:24:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:32.162 20:24:47 -- common/autotest_common.sh@852 -- # return 0 00:17:32.162 20:24:47 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:32.162 20:24:47 -- ftl/common.sh@54 -- # local name=nvme0 00:17:32.162 20:24:47 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:32.162 20:24:47 -- ftl/common.sh@56 -- # local size=103424 00:17:32.162 20:24:47 -- ftl/common.sh@59 -- # local base_bdev 00:17:32.162 20:24:47 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:32.438 20:24:47 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:32.438 20:24:47 -- ftl/common.sh@62 -- # local base_size 00:17:32.438 20:24:47 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:32.438 20:24:47 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:17:32.438 20:24:47 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:32.438 20:24:47 -- common/autotest_common.sh@1359 -- # local bs 00:17:32.438 20:24:47 -- common/autotest_common.sh@1360 -- # local nb 00:17:32.438 20:24:47 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:32.700 20:24:47 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:32.700 { 00:17:32.700 "name": "nvme0n1", 00:17:32.700 "aliases": [ 00:17:32.700 "4d4c27fb-257e-4df3-ab82-9fdd465bafcd" 00:17:32.700 ], 00:17:32.700 "product_name": "NVMe disk", 00:17:32.700 "block_size": 4096, 00:17:32.700 "num_blocks": 1310720, 00:17:32.700 "uuid": "4d4c27fb-257e-4df3-ab82-9fdd465bafcd", 00:17:32.700 "assigned_rate_limits": { 
00:17:32.700 "rw_ios_per_sec": 0, 00:17:32.700 "rw_mbytes_per_sec": 0, 00:17:32.700 "r_mbytes_per_sec": 0, 00:17:32.700 "w_mbytes_per_sec": 0 00:17:32.700 }, 00:17:32.700 "claimed": true, 00:17:32.700 "claim_type": "read_many_write_one", 00:17:32.700 "zoned": false, 00:17:32.700 "supported_io_types": { 00:17:32.700 "read": true, 00:17:32.700 "write": true, 00:17:32.700 "unmap": true, 00:17:32.700 "write_zeroes": true, 00:17:32.700 "flush": true, 00:17:32.700 "reset": true, 00:17:32.700 "compare": true, 00:17:32.700 "compare_and_write": false, 00:17:32.700 "abort": true, 00:17:32.700 "nvme_admin": true, 00:17:32.700 "nvme_io": true 00:17:32.700 }, 00:17:32.700 "driver_specific": { 00:17:32.700 "nvme": [ 00:17:32.700 { 00:17:32.700 "pci_address": "0000:00:07.0", 00:17:32.700 "trid": { 00:17:32.700 "trtype": "PCIe", 00:17:32.700 "traddr": "0000:00:07.0" 00:17:32.700 }, 00:17:32.700 "ctrlr_data": { 00:17:32.701 "cntlid": 0, 00:17:32.701 "vendor_id": "0x1b36", 00:17:32.701 "model_number": "QEMU NVMe Ctrl", 00:17:32.701 "serial_number": "12341", 00:17:32.701 "firmware_revision": "8.0.0", 00:17:32.701 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:32.701 "oacs": { 00:17:32.701 "security": 0, 00:17:32.701 "format": 1, 00:17:32.701 "firmware": 0, 00:17:32.701 "ns_manage": 1 00:17:32.701 }, 00:17:32.701 "multi_ctrlr": false, 00:17:32.701 "ana_reporting": false 00:17:32.701 }, 00:17:32.701 "vs": { 00:17:32.701 "nvme_version": "1.4" 00:17:32.701 }, 00:17:32.701 "ns_data": { 00:17:32.701 "id": 1, 00:17:32.701 "can_share": false 00:17:32.701 } 00:17:32.701 } 00:17:32.701 ], 00:17:32.701 "mp_policy": "active_passive" 00:17:32.701 } 00:17:32.701 } 00:17:32.701 ]' 00:17:32.701 20:24:47 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:32.701 20:24:47 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:32.701 20:24:47 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:32.701 20:24:47 -- common/autotest_common.sh@1363 -- # nb=1310720 00:17:32.701 20:24:47 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:17:32.701 20:24:47 -- common/autotest_common.sh@1367 -- # echo 5120 00:17:32.701 20:24:47 -- ftl/common.sh@63 -- # base_size=5120 00:17:32.701 20:24:47 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:32.701 20:24:47 -- ftl/common.sh@67 -- # clear_lvols 00:17:32.701 20:24:47 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:32.701 20:24:47 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:32.962 20:24:47 -- ftl/common.sh@28 -- # stores=10814edb-d737-483b-84ad-c8bbb50843e3 00:17:32.962 20:24:47 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:32.962 20:24:47 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 10814edb-d737-483b-84ad-c8bbb50843e3 00:17:33.223 20:24:47 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:33.484 20:24:48 -- ftl/common.sh@68 -- # lvs=54d14d06-c2a6-45f4-bde8-7aa430cd5691 00:17:33.484 20:24:48 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 54d14d06-c2a6-45f4-bde8-7aa430cd5691 00:17:33.484 20:24:48 -- ftl/restore.sh@43 -- # split_bdev=2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:33.484 20:24:48 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:33.484 20:24:48 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:33.484 20:24:48 -- ftl/common.sh@35 -- # local name=nvc0 
00:17:33.484 20:24:48 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:33.484 20:24:48 -- ftl/common.sh@37 -- # local base_bdev=2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:33.484 20:24:48 -- ftl/common.sh@38 -- # local cache_size= 00:17:33.484 20:24:48 -- ftl/common.sh@41 -- # get_bdev_size 2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:33.484 20:24:48 -- common/autotest_common.sh@1357 -- # local bdev_name=2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:33.484 20:24:48 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:33.484 20:24:48 -- common/autotest_common.sh@1359 -- # local bs 00:17:33.484 20:24:48 -- common/autotest_common.sh@1360 -- # local nb 00:17:33.484 20:24:48 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:33.745 20:24:48 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:33.745 { 00:17:33.745 "name": "2d8a1f58-7a65-4113-b2b0-e5585b0b24da", 00:17:33.745 "aliases": [ 00:17:33.745 "lvs/nvme0n1p0" 00:17:33.745 ], 00:17:33.745 "product_name": "Logical Volume", 00:17:33.745 "block_size": 4096, 00:17:33.745 "num_blocks": 26476544, 00:17:33.745 "uuid": "2d8a1f58-7a65-4113-b2b0-e5585b0b24da", 00:17:33.745 "assigned_rate_limits": { 00:17:33.745 "rw_ios_per_sec": 0, 00:17:33.745 "rw_mbytes_per_sec": 0, 00:17:33.745 "r_mbytes_per_sec": 0, 00:17:33.745 "w_mbytes_per_sec": 0 00:17:33.745 }, 00:17:33.745 "claimed": false, 00:17:33.745 "zoned": false, 00:17:33.745 "supported_io_types": { 00:17:33.745 "read": true, 00:17:33.745 "write": true, 00:17:33.745 "unmap": true, 00:17:33.745 "write_zeroes": true, 00:17:33.745 "flush": false, 00:17:33.745 "reset": true, 00:17:33.745 "compare": false, 00:17:33.745 "compare_and_write": false, 00:17:33.745 "abort": false, 00:17:33.745 "nvme_admin": false, 00:17:33.745 "nvme_io": false 00:17:33.745 }, 00:17:33.745 "driver_specific": { 00:17:33.745 "lvol": { 00:17:33.745 "lvol_store_uuid": "54d14d06-c2a6-45f4-bde8-7aa430cd5691", 00:17:33.745 "base_bdev": "nvme0n1", 00:17:33.745 "thin_provision": true, 00:17:33.745 "snapshot": false, 00:17:33.745 "clone": false, 00:17:33.745 "esnap_clone": false 00:17:33.745 } 00:17:33.745 } 00:17:33.745 } 00:17:33.745 ]' 00:17:33.745 20:24:48 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:33.745 20:24:48 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:33.745 20:24:48 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:33.745 20:24:48 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:33.745 20:24:48 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:33.745 20:24:48 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:33.745 20:24:48 -- ftl/common.sh@41 -- # local base_size=5171 00:17:33.745 20:24:48 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:33.745 20:24:48 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:34.007 20:24:48 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:34.007 20:24:48 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:34.007 20:24:48 -- ftl/common.sh@48 -- # get_bdev_size 2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:34.007 20:24:48 -- common/autotest_common.sh@1357 -- # local bdev_name=2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:34.007 20:24:48 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:34.007 20:24:48 -- common/autotest_common.sh@1359 -- # local bs 00:17:34.007 20:24:48 -- common/autotest_common.sh@1360 -- # local nb 00:17:34.007 20:24:48 -- 
common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:34.267 20:24:49 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:34.267 { 00:17:34.267 "name": "2d8a1f58-7a65-4113-b2b0-e5585b0b24da", 00:17:34.267 "aliases": [ 00:17:34.267 "lvs/nvme0n1p0" 00:17:34.267 ], 00:17:34.267 "product_name": "Logical Volume", 00:17:34.267 "block_size": 4096, 00:17:34.267 "num_blocks": 26476544, 00:17:34.267 "uuid": "2d8a1f58-7a65-4113-b2b0-e5585b0b24da", 00:17:34.267 "assigned_rate_limits": { 00:17:34.267 "rw_ios_per_sec": 0, 00:17:34.267 "rw_mbytes_per_sec": 0, 00:17:34.267 "r_mbytes_per_sec": 0, 00:17:34.267 "w_mbytes_per_sec": 0 00:17:34.267 }, 00:17:34.267 "claimed": false, 00:17:34.267 "zoned": false, 00:17:34.267 "supported_io_types": { 00:17:34.267 "read": true, 00:17:34.267 "write": true, 00:17:34.267 "unmap": true, 00:17:34.267 "write_zeroes": true, 00:17:34.267 "flush": false, 00:17:34.267 "reset": true, 00:17:34.267 "compare": false, 00:17:34.267 "compare_and_write": false, 00:17:34.267 "abort": false, 00:17:34.267 "nvme_admin": false, 00:17:34.267 "nvme_io": false 00:17:34.267 }, 00:17:34.267 "driver_specific": { 00:17:34.267 "lvol": { 00:17:34.267 "lvol_store_uuid": "54d14d06-c2a6-45f4-bde8-7aa430cd5691", 00:17:34.267 "base_bdev": "nvme0n1", 00:17:34.267 "thin_provision": true, 00:17:34.267 "snapshot": false, 00:17:34.267 "clone": false, 00:17:34.267 "esnap_clone": false 00:17:34.268 } 00:17:34.268 } 00:17:34.268 } 00:17:34.268 ]' 00:17:34.268 20:24:49 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:34.268 20:24:49 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:34.268 20:24:49 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:34.268 20:24:49 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:34.268 20:24:49 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:34.268 20:24:49 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:34.268 20:24:49 -- ftl/common.sh@48 -- # cache_size=5171 00:17:34.268 20:24:49 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:34.528 20:24:49 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:34.528 20:24:49 -- ftl/restore.sh@48 -- # get_bdev_size 2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:34.528 20:24:49 -- common/autotest_common.sh@1357 -- # local bdev_name=2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:34.528 20:24:49 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:34.528 20:24:49 -- common/autotest_common.sh@1359 -- # local bs 00:17:34.528 20:24:49 -- common/autotest_common.sh@1360 -- # local nb 00:17:34.528 20:24:49 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d8a1f58-7a65-4113-b2b0-e5585b0b24da 00:17:34.790 20:24:49 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:34.790 { 00:17:34.790 "name": "2d8a1f58-7a65-4113-b2b0-e5585b0b24da", 00:17:34.790 "aliases": [ 00:17:34.790 "lvs/nvme0n1p0" 00:17:34.790 ], 00:17:34.790 "product_name": "Logical Volume", 00:17:34.790 "block_size": 4096, 00:17:34.790 "num_blocks": 26476544, 00:17:34.790 "uuid": "2d8a1f58-7a65-4113-b2b0-e5585b0b24da", 00:17:34.790 "assigned_rate_limits": { 00:17:34.790 "rw_ios_per_sec": 0, 00:17:34.790 "rw_mbytes_per_sec": 0, 00:17:34.790 "r_mbytes_per_sec": 0, 00:17:34.790 "w_mbytes_per_sec": 0 00:17:34.790 }, 00:17:34.790 "claimed": false, 00:17:34.790 "zoned": false, 00:17:34.790 "supported_io_types": { 00:17:34.790 
"read": true, 00:17:34.790 "write": true, 00:17:34.790 "unmap": true, 00:17:34.790 "write_zeroes": true, 00:17:34.790 "flush": false, 00:17:34.790 "reset": true, 00:17:34.790 "compare": false, 00:17:34.790 "compare_and_write": false, 00:17:34.790 "abort": false, 00:17:34.790 "nvme_admin": false, 00:17:34.790 "nvme_io": false 00:17:34.790 }, 00:17:34.790 "driver_specific": { 00:17:34.790 "lvol": { 00:17:34.790 "lvol_store_uuid": "54d14d06-c2a6-45f4-bde8-7aa430cd5691", 00:17:34.790 "base_bdev": "nvme0n1", 00:17:34.790 "thin_provision": true, 00:17:34.790 "snapshot": false, 00:17:34.790 "clone": false, 00:17:34.790 "esnap_clone": false 00:17:34.790 } 00:17:34.790 } 00:17:34.790 } 00:17:34.790 ]' 00:17:34.790 20:24:49 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:34.790 20:24:49 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:34.791 20:24:49 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:34.791 20:24:49 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:34.791 20:24:49 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:34.791 20:24:49 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:34.791 20:24:49 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:34.791 20:24:49 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2d8a1f58-7a65-4113-b2b0-e5585b0b24da --l2p_dram_limit 10' 00:17:34.791 20:24:49 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:34.791 20:24:49 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:34.791 20:24:49 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:34.791 20:24:49 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:34.791 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:34.791 20:24:49 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2d8a1f58-7a65-4113-b2b0-e5585b0b24da --l2p_dram_limit 10 -c nvc0n1p0 00:17:34.791 [2024-10-16 20:24:49.688910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.688950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:34.791 [2024-10-16 20:24:49.688961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.791 [2024-10-16 20:24:49.688969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.689010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.689017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.791 [2024-10-16 20:24:49.689025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:34.791 [2024-10-16 20:24:49.689031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.689059] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:34.791 [2024-10-16 20:24:49.689665] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:34.791 [2024-10-16 20:24:49.689684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.689691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.791 [2024-10-16 20:24:49.689699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:17:34.791 [2024-10-16 20:24:49.689704] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.689729] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d899f785-ee45-4182-91c8-1d95f73f7d3d 00:17:34.791 [2024-10-16 20:24:49.690668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.690691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:34.791 [2024-10-16 20:24:49.690699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:34.791 [2024-10-16 20:24:49.690705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.695344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.695370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.791 [2024-10-16 20:24:49.695378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.604 ms 00:17:34.791 [2024-10-16 20:24:49.695386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.695484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.695493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.791 [2024-10-16 20:24:49.695500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:34.791 [2024-10-16 20:24:49.695509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.695543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.695554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:34.791 [2024-10-16 20:24:49.695559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:34.791 [2024-10-16 20:24:49.695566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.695586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:34.791 [2024-10-16 20:24:49.698521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.698626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.791 [2024-10-16 20:24:49.698641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:17:34.791 [2024-10-16 20:24:49.698647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.698678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.698685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:34.791 [2024-10-16 20:24:49.698692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:34.791 [2024-10-16 20:24:49.698698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.698712] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:34.791 [2024-10-16 20:24:49.698799] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:34.791 [2024-10-16 20:24:49.698811] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:34.791 [2024-10-16 
20:24:49.698819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:34.791 [2024-10-16 20:24:49.698828] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:34.791 [2024-10-16 20:24:49.698835] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:34.791 [2024-10-16 20:24:49.698844] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:34.791 [2024-10-16 20:24:49.698856] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:34.791 [2024-10-16 20:24:49.698862] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:34.791 [2024-10-16 20:24:49.698868] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:34.791 [2024-10-16 20:24:49.698874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.698880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:34.791 [2024-10-16 20:24:49.698887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:17:34.791 [2024-10-16 20:24:49.698893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.698941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.791 [2024-10-16 20:24:49.698947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:34.791 [2024-10-16 20:24:49.698955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:34.791 [2024-10-16 20:24:49.698961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.791 [2024-10-16 20:24:49.699018] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:34.791 [2024-10-16 20:24:49.699025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:34.791 [2024-10-16 20:24:49.699033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.791 [2024-10-16 20:24:49.699038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:34.791 [2024-10-16 20:24:49.699065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:34.791 [2024-10-16 20:24:49.699077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:34.791 [2024-10-16 20:24:49.699083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.791 [2024-10-16 20:24:49.699096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:34.791 [2024-10-16 20:24:49.699101] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:34.791 [2024-10-16 20:24:49.699109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.791 [2024-10-16 20:24:49.699115] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:34.791 [2024-10-16 20:24:49.699121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:34.791 [2024-10-16 20:24:49.699126] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:34.791 [2024-10-16 20:24:49.699140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:34.791 [2024-10-16 20:24:49.699146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:34.791 [2024-10-16 20:24:49.699157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:34.791 [2024-10-16 20:24:49.699162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:34.791 [2024-10-16 20:24:49.699169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:34.791 [2024-10-16 20:24:49.699174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.791 [2024-10-16 20:24:49.699185] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:34.791 [2024-10-16 20:24:49.699191] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.791 [2024-10-16 20:24:49.699202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:34.791 [2024-10-16 20:24:49.699206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.791 [2024-10-16 20:24:49.699217] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:34.791 [2024-10-16 20:24:49.699224] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.791 [2024-10-16 20:24:49.699235] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:34.791 [2024-10-16 20:24:49.699241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:34.791 [2024-10-16 20:24:49.699247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.791 [2024-10-16 20:24:49.699252] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:34.791 [2024-10-16 20:24:49.699259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:34.791 [2024-10-16 20:24:49.699264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.791 [2024-10-16 20:24:49.699270] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:34.791 [2024-10-16 20:24:49.699275] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:34.792 [2024-10-16 20:24:49.699282] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.792 [2024-10-16 20:24:49.699287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.792 [2024-10-16 20:24:49.699297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:34.792 [2024-10-16 20:24:49.699303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:34.792 [2024-10-16 20:24:49.699309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:34.792 [2024-10-16 20:24:49.699315] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:34.792 [2024-10-16 20:24:49.699323] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:34.792 [2024-10-16 20:24:49.699327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:34.792 [2024-10-16 20:24:49.699334] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:34.792 [2024-10-16 20:24:49.699342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.792 [2024-10-16 20:24:49.699350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:34.792 [2024-10-16 20:24:49.699355] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:34.792 [2024-10-16 20:24:49.699362] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:34.792 [2024-10-16 20:24:49.699367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:34.792 [2024-10-16 20:24:49.699373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:34.792 [2024-10-16 20:24:49.699379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:34.792 [2024-10-16 20:24:49.699385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:34.792 [2024-10-16 20:24:49.699390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:34.792 [2024-10-16 20:24:49.699397] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:34.792 [2024-10-16 20:24:49.699402] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:34.792 [2024-10-16 20:24:49.699408] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:34.792 [2024-10-16 20:24:49.699414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:34.792 [2024-10-16 20:24:49.699423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:34.792 [2024-10-16 20:24:49.699429] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:34.792 [2024-10-16 20:24:49.699436] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.792 [2024-10-16 20:24:49.699442] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:34.792 [2024-10-16 20:24:49.699448] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:17:34.792 [2024-10-16 20:24:49.699454] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:34.792 [2024-10-16 20:24:49.699460] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:34.792 [2024-10-16 20:24:49.699466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.792 [2024-10-16 20:24:49.699473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:34.792 [2024-10-16 20:24:49.699478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:17:34.792 [2024-10-16 20:24:49.699485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.792 [2024-10-16 20:24:49.711494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.792 [2024-10-16 20:24:49.711591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.792 [2024-10-16 20:24:49.711632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.966 ms 00:17:34.792 [2024-10-16 20:24:49.711652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.792 [2024-10-16 20:24:49.711730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.792 [2024-10-16 20:24:49.711783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:34.792 [2024-10-16 20:24:49.711819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:34.792 [2024-10-16 20:24:49.711835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.053 [2024-10-16 20:24:49.735659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.053 [2024-10-16 20:24:49.735754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.053 [2024-10-16 20:24:49.735797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.783 ms 00:17:35.053 [2024-10-16 20:24:49.735817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.053 [2024-10-16 20:24:49.735854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.053 [2024-10-16 20:24:49.735871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.053 [2024-10-16 20:24:49.735886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:35.053 [2024-10-16 20:24:49.735902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.053 [2024-10-16 20:24:49.736218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.053 [2024-10-16 20:24:49.736250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.053 [2024-10-16 20:24:49.736267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:17:35.053 [2024-10-16 20:24:49.736282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.053 [2024-10-16 20:24:49.736375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.053 [2024-10-16 20:24:49.736445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.053 [2024-10-16 20:24:49.736460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:35.053 [2024-10-16 20:24:49.736475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.053 
[2024-10-16 20:24:49.748362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.053 [2024-10-16 20:24:49.748446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.053 [2024-10-16 20:24:49.748483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.802 ms 00:17:35.053 [2024-10-16 20:24:49.748502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.053 [2024-10-16 20:24:49.758244] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:35.053 [2024-10-16 20:24:49.760647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.053 [2024-10-16 20:24:49.760723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:35.053 [2024-10-16 20:24:49.760761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.080 ms 00:17:35.053 [2024-10-16 20:24:49.760778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.053 [2024-10-16 20:24:49.829729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.053 [2024-10-16 20:24:49.829864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:35.053 [2024-10-16 20:24:49.829925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.918 ms 00:17:35.053 [2024-10-16 20:24:49.829949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.053 [2024-10-16 20:24:49.830155] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:35.053 [2024-10-16 20:24:49.830195] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:39.263 [2024-10-16 20:24:53.353787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.263 [2024-10-16 20:24:53.354102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:39.263 [2024-10-16 20:24:53.354215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3523.615 ms 00:17:39.263 [2024-10-16 20:24:53.354244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.263 [2024-10-16 20:24:53.354445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.263 [2024-10-16 20:24:53.354603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:39.263 [2024-10-16 20:24:53.354633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:39.263 [2024-10-16 20:24:53.354649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.263 [2024-10-16 20:24:53.375615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.263 [2024-10-16 20:24:53.375778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:39.263 [2024-10-16 20:24:53.375841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.898 ms 00:17:39.263 [2024-10-16 20:24:53.375860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.263 [2024-10-16 20:24:53.394726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.263 [2024-10-16 20:24:53.394858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:39.264 [2024-10-16 20:24:53.394929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.802 ms 00:17:39.264 [2024-10-16 20:24:53.394945] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.395235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.395297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:39.264 [2024-10-16 20:24:53.395344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:17:39.264 [2024-10-16 20:24:53.395362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.451520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.451636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:39.264 [2024-10-16 20:24:53.451690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.100 ms 00:17:39.264 [2024-10-16 20:24:53.451708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.470875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.470978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:39.264 [2024-10-16 20:24:53.471020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.134 ms 00:17:39.264 [2024-10-16 20:24:53.471037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.472028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.472131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:39.264 [2024-10-16 20:24:53.472175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:17:39.264 [2024-10-16 20:24:53.472193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.490230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.490326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:39.264 [2024-10-16 20:24:53.490369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.996 ms 00:17:39.264 [2024-10-16 20:24:53.490386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.490421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.490437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:39.264 [2024-10-16 20:24:53.490453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:39.264 [2024-10-16 20:24:53.490467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.490548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.490568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:39.264 [2024-10-16 20:24:53.490584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:39.264 [2024-10-16 20:24:53.490598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.491439] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3802.189 ms, result 0 00:17:39.264 { 00:17:39.264 "name": "ftl0", 00:17:39.264 "uuid": "d899f785-ee45-4182-91c8-1d95f73f7d3d" 00:17:39.264 } 00:17:39.264 20:24:53 -- 
ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:39.264 20:24:53 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:39.264 20:24:53 -- ftl/restore.sh@63 -- # echo ']}' 00:17:39.264 20:24:53 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:39.264 [2024-10-16 20:24:53.870842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.870884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:39.264 [2024-10-16 20:24:53.870894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:39.264 [2024-10-16 20:24:53.870901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.870920] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:39.264 [2024-10-16 20:24:53.873092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.873206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:39.264 [2024-10-16 20:24:53.873224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.158 ms 00:17:39.264 [2024-10-16 20:24:53.873235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.873441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.873448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:39.264 [2024-10-16 20:24:53.873456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:17:39.264 [2024-10-16 20:24:53.873462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.875897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.875913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:39.264 [2024-10-16 20:24:53.875922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:17:39.264 [2024-10-16 20:24:53.875928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.880570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.880665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:39.264 [2024-10-16 20:24:53.880679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.624 ms 00:17:39.264 [2024-10-16 20:24:53.880685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.899240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.899325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:39.264 [2024-10-16 20:24:53.899366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.495 ms 00:17:39.264 [2024-10-16 20:24:53.899383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.911650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.911741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:39.264 [2024-10-16 20:24:53.911783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.230 ms 00:17:39.264 
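The three shell-trace lines above (restore.sh@61-63) show how the test captures the running bdev configuration before tearing the app down, and restore.sh@65 then unloads ftl0 so its superblock and metadata are persisted. A minimal standalone sketch of that capture-then-unload pattern, assuming the rpc.py location shown in this log and that the captured JSON is redirected to the ftl.json path consumed by the later spdk_dd run (the redirect target is an assumption; the log does not show it explicitly):

#!/usr/bin/env bash
# Sketch of the config-capture pattern from restore.sh@61-63 (paths taken
# from this log; the redirect to ftl.json is assumed, not shown in the log).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
{
  echo '{"subsystems": ['                  # open the top-level subsystems array
  "$RPC" save_subsystem_config -n bdev     # dump only the bdev subsystem config
  echo ']}'                                # close the array and object
} > "$CFG"
# ftl0 must be unloaded cleanly so the FTL clean state and metadata are
# persisted before the owning app exits (the shutdown trace that follows):
"$RPC" bdev_ftl_unload -b ftl0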
[2024-10-16 20:24:53.911800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.911911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.911931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:39.264 [2024-10-16 20:24:53.911948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:39.264 [2024-10-16 20:24:53.911964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.929981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.930086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:39.264 [2024-10-16 20:24:53.930129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.991 ms 00:17:39.264 [2024-10-16 20:24:53.930147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.947616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.947703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:39.264 [2024-10-16 20:24:53.947743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.434 ms 00:17:39.264 [2024-10-16 20:24:53.947760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.965002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.965096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:39.264 [2024-10-16 20:24:53.965161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.208 ms 00:17:39.264 [2024-10-16 20:24:53.965179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.982504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.264 [2024-10-16 20:24:53.982632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:39.264 [2024-10-16 20:24:53.982679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.252 ms 00:17:39.264 [2024-10-16 20:24:53.982696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.264 [2024-10-16 20:24:53.982732] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:39.264 [2024-10-16 20:24:53.982755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.982780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.982802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.982825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.982881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.982933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.982956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.982979] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983762] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.983997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 
20:24:53.984573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.984980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.985004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.985028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.985063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.985091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:39.265 [2024-10-16 20:24:53.985198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:17:39.266 [2024-10-16 20:24:53.985474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:39.266 [2024-10-16 20:24:53.985993] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:39.266 [2024-10-16 20:24:53.986011] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d899f785-ee45-4182-91c8-1d95f73f7d3d 00:17:39.266 [2024-10-16 20:24:53.986092] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:39.266 [2024-10-16 20:24:53.986109] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:39.266 [2024-10-16 20:24:53.986124] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:39.266 [2024-10-16 20:24:53.986140] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:39.266 [2024-10-16 20:24:53.986177] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:39.266 [2024-10-16 20:24:53.986196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:39.266 [2024-10-16 20:24:53.986211] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:39.266 [2024-10-16 20:24:53.986226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:39.266 [2024-10-16 20:24:53.986270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:39.266 [2024-10-16 20:24:53.986311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.266 [2024-10-16 20:24:53.986329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:39.266 [2024-10-16 20:24:53.986394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.579 ms 00:17:39.266 [2024-10-16 20:24:53.986412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:53.996524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.266 [2024-10-16 20:24:53.996671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:39.266 [2024-10-16 20:24:53.996685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.071 ms 00:17:39.266 [2024-10-16 20:24:53.996691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:53.996850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.266 [2024-10-16 20:24:53.996858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:39.266 [2024-10-16 20:24:53.996866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:39.266 [2024-10-16 20:24:53.996871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:54.032373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.266 [2024-10-16 20:24:54.032399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.266 [2024-10-16 20:24:54.032408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.266 [2024-10-16 20:24:54.032415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:54.032461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.266 [2024-10-16 20:24:54.032469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.266 [2024-10-16 20:24:54.032476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.266 [2024-10-16 20:24:54.032481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:54.032531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.266 [2024-10-16 20:24:54.032539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.266 [2024-10-16 20:24:54.032546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.266 [2024-10-16 20:24:54.032552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:54.032566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.266 [2024-10-16 20:24:54.032572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.266 [2024-10-16 20:24:54.032581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.266 [2024-10-16 20:24:54.032586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:54.091190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:17:39.266 [2024-10-16 20:24:54.091218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.266 [2024-10-16 20:24:54.091228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.266 [2024-10-16 20:24:54.091234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:54.113413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.266 [2024-10-16 20:24:54.113438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.266 [2024-10-16 20:24:54.113446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.266 [2024-10-16 20:24:54.113452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:54.113499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.266 [2024-10-16 20:24:54.113506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.266 [2024-10-16 20:24:54.113521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.266 [2024-10-16 20:24:54.113527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.266 [2024-10-16 20:24:54.113562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.266 [2024-10-16 20:24:54.113569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.266 [2024-10-16 20:24:54.113576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.267 [2024-10-16 20:24:54.113583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.267 [2024-10-16 20:24:54.113653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.267 [2024-10-16 20:24:54.113661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.267 [2024-10-16 20:24:54.113668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.267 [2024-10-16 20:24:54.113673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.267 [2024-10-16 20:24:54.113701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.267 [2024-10-16 20:24:54.113708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.267 [2024-10-16 20:24:54.113715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.267 [2024-10-16 20:24:54.113721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.267 [2024-10-16 20:24:54.113752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.267 [2024-10-16 20:24:54.113758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.267 [2024-10-16 20:24:54.113766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.267 [2024-10-16 20:24:54.113771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.267 [2024-10-16 20:24:54.113807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.267 [2024-10-16 20:24:54.113814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.267 [2024-10-16 20:24:54.113821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.267 [2024-10-16 20:24:54.113828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.267 [2024-10-16 
20:24:54.113926] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 243.054 ms, result 0 00:17:39.267 true 00:17:39.267 20:24:54 -- ftl/restore.sh@66 -- # killprocess 72896 00:17:39.267 20:24:54 -- common/autotest_common.sh@926 -- # '[' -z 72896 ']' 00:17:39.267 20:24:54 -- common/autotest_common.sh@930 -- # kill -0 72896 00:17:39.267 20:24:54 -- common/autotest_common.sh@931 -- # uname 00:17:39.267 20:24:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.267 20:24:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72896 00:17:39.267 20:24:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:39.267 20:24:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:39.267 killing process with pid 72896 00:17:39.267 20:24:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72896' 00:17:39.267 20:24:54 -- common/autotest_common.sh@945 -- # kill 72896 00:17:39.267 20:24:54 -- common/autotest_common.sh@950 -- # wait 72896 00:17:45.857 20:24:59 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:48.418 262144+0 records in 00:17:48.418 262144+0 records out 00:17:48.418 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.45336 s, 311 MB/s 00:17:48.418 20:25:03 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:50.988 20:25:05 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.988 [2024-10-16 20:25:05.385389] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
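The dd arithmetic above is exact: bs=4K times count=256K is 4096 x 262144 = 1073741824 bytes (1 GiB), which matches the "262144+0 records in/out" lines. A hedged sketch of the same prepare-and-write sequence from restore.sh@69-73, using only the paths and flags visible in this log (the checksum is kept so the data can be compared after the FTL device is restored):

#!/usr/bin/env bash
# Sketch of the restore test's data-prep and write path (restore.sh@69-73),
# reconstructed from the commands traced in this log.
SPDK=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK/test/ftl/testfile

# 4 KiB blocks x 262144 blocks = 1073741824 bytes = 1 GiB of random data,
# matching the "262144+0 records in/out" output above.
dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

md5sum "$TESTFILE"   # checksum retained for comparison after the restore

# Replay the test file into the re-created ftl0 bdev; ftl.json is the bdev
# configuration captured before the previous app instance shut down.
"$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json"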
00:17:50.988 [2024-10-16 20:25:05.385470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73151 ] 00:17:50.988 [2024-10-16 20:25:05.529411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.988 [2024-10-16 20:25:05.736663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.249 [2024-10-16 20:25:06.026847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.249 [2024-10-16 20:25:06.026936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.512 [2024-10-16 20:25:06.182792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.182862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:51.512 [2024-10-16 20:25:06.182876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:51.512 [2024-10-16 20:25:06.182888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.182943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.182955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.512 [2024-10-16 20:25:06.182963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:51.512 [2024-10-16 20:25:06.182971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.182990] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:51.512 [2024-10-16 20:25:06.183822] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:51.512 [2024-10-16 20:25:06.183850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.183859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.512 [2024-10-16 20:25:06.183869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:17:51.512 [2024-10-16 20:25:06.183877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.185634] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:51.512 [2024-10-16 20:25:06.200510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.200563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:51.512 [2024-10-16 20:25:06.200578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.877 ms 00:17:51.512 [2024-10-16 20:25:06.200586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.200671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.200681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:51.512 [2024-10-16 20:25:06.200691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:51.512 [2024-10-16 20:25:06.200699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.209394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 
20:25:06.209439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.512 [2024-10-16 20:25:06.209449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.605 ms 00:17:51.512 [2024-10-16 20:25:06.209458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.209569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.209579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.512 [2024-10-16 20:25:06.209588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:51.512 [2024-10-16 20:25:06.209597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.209645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.209655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:51.512 [2024-10-16 20:25:06.209664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:51.512 [2024-10-16 20:25:06.209671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.209702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.512 [2024-10-16 20:25:06.213937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.213979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.512 [2024-10-16 20:25:06.213990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.248 ms 00:17:51.512 [2024-10-16 20:25:06.213999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.214038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.214058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:51.512 [2024-10-16 20:25:06.214067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:51.512 [2024-10-16 20:25:06.214078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.214132] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:51.512 [2024-10-16 20:25:06.214154] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:51.512 [2024-10-16 20:25:06.214191] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:51.512 [2024-10-16 20:25:06.214207] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:51.512 [2024-10-16 20:25:06.214283] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:51.512 [2024-10-16 20:25:06.214294] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:51.512 [2024-10-16 20:25:06.214308] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:51.512 [2024-10-16 20:25:06.214319] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:51.512 [2024-10-16 20:25:06.214328] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:51.512 [2024-10-16 20:25:06.214336] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:51.512 [2024-10-16 20:25:06.214343] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:51.512 [2024-10-16 20:25:06.214351] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:51.512 [2024-10-16 20:25:06.214358] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:51.512 [2024-10-16 20:25:06.214367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.214374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:51.512 [2024-10-16 20:25:06.214382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:17:51.512 [2024-10-16 20:25:06.214390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.214456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.512 [2024-10-16 20:25:06.214466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:51.512 [2024-10-16 20:25:06.214473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:51.512 [2024-10-16 20:25:06.214480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.512 [2024-10-16 20:25:06.214550] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:51.512 [2024-10-16 20:25:06.214561] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:51.512 [2024-10-16 20:25:06.214569] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.512 [2024-10-16 20:25:06.214577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.512 [2024-10-16 20:25:06.214585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:51.512 [2024-10-16 20:25:06.214591] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:51.512 [2024-10-16 20:25:06.214598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:51.512 [2024-10-16 20:25:06.214606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:51.512 [2024-10-16 20:25:06.214613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:51.512 [2024-10-16 20:25:06.214619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.512 [2024-10-16 20:25:06.214628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:51.512 [2024-10-16 20:25:06.214635] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:51.512 [2024-10-16 20:25:06.214642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.512 [2024-10-16 20:25:06.214649] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:51.512 [2024-10-16 20:25:06.214656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:51.512 [2024-10-16 20:25:06.214663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.512 [2024-10-16 20:25:06.214678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:51.512 [2024-10-16 20:25:06.214685] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:51.512 [2024-10-16 20:25:06.214691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:17:51.512 [2024-10-16 20:25:06.214698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:51.512 [2024-10-16 20:25:06.214705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:51.512 [2024-10-16 20:25:06.214712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:51.512 [2024-10-16 20:25:06.214720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:51.512 [2024-10-16 20:25:06.214726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:51.512 [2024-10-16 20:25:06.214735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:51.512 [2024-10-16 20:25:06.214742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:51.512 [2024-10-16 20:25:06.214749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:51.513 [2024-10-16 20:25:06.214755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:51.513 [2024-10-16 20:25:06.214762] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:51.513 [2024-10-16 20:25:06.214768] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:51.513 [2024-10-16 20:25:06.214775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:51.513 [2024-10-16 20:25:06.214781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:51.513 [2024-10-16 20:25:06.214788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:51.513 [2024-10-16 20:25:06.214795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:51.513 [2024-10-16 20:25:06.214801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:51.513 [2024-10-16 20:25:06.214807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:51.513 [2024-10-16 20:25:06.214814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.513 [2024-10-16 20:25:06.214821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:51.513 [2024-10-16 20:25:06.214828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:51.513 [2024-10-16 20:25:06.214834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.513 [2024-10-16 20:25:06.214840] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:51.513 [2024-10-16 20:25:06.214850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:51.513 [2024-10-16 20:25:06.214861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.513 [2024-10-16 20:25:06.214869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.513 [2024-10-16 20:25:06.214877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:51.513 [2024-10-16 20:25:06.214884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:51.513 [2024-10-16 20:25:06.214891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:51.513 [2024-10-16 20:25:06.214897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:51.513 [2024-10-16 20:25:06.214904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:51.513 [2024-10-16 20:25:06.214911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:51.513 [2024-10-16 20:25:06.214919] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:51.513 [2024-10-16 20:25:06.214930] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.513 [2024-10-16 20:25:06.214939] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:51.513 [2024-10-16 20:25:06.214946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:51.513 [2024-10-16 20:25:06.214953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:51.513 [2024-10-16 20:25:06.214960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:51.513 [2024-10-16 20:25:06.214968] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:51.513 [2024-10-16 20:25:06.214976] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:51.513 [2024-10-16 20:25:06.214983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:51.513 [2024-10-16 20:25:06.214990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:51.513 [2024-10-16 20:25:06.214998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:51.513 [2024-10-16 20:25:06.215005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:51.513 [2024-10-16 20:25:06.215012] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:51.513 [2024-10-16 20:25:06.215020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:51.513 [2024-10-16 20:25:06.215027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:51.513 [2024-10-16 20:25:06.215034] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:51.513 [2024-10-16 20:25:06.215071] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.513 [2024-10-16 20:25:06.215081] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:51.513 [2024-10-16 20:25:06.215089] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:51.513 [2024-10-16 20:25:06.215096] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:51.513 [2024-10-16 20:25:06.215104] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:17:51.513 [2024-10-16 20:25:06.215112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.215120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:51.513 [2024-10-16 20:25:06.215127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:17:51.513 [2024-10-16 20:25:06.215136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.233284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.233337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.513 [2024-10-16 20:25:06.233350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.105 ms 00:17:51.513 [2024-10-16 20:25:06.233365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.233461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.233471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:51.513 [2024-10-16 20:25:06.233495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:51.513 [2024-10-16 20:25:06.233504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.284629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.284692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.513 [2024-10-16 20:25:06.284705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.068 ms 00:17:51.513 [2024-10-16 20:25:06.284714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.284766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.284776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.513 [2024-10-16 20:25:06.284785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:51.513 [2024-10-16 20:25:06.284793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.285427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.285470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.513 [2024-10-16 20:25:06.285495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:17:51.513 [2024-10-16 20:25:06.285510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.285641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.285652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.513 [2024-10-16 20:25:06.285661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:17:51.513 [2024-10-16 20:25:06.285669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.302555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.302605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.513 [2024-10-16 20:25:06.302616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.860 ms 00:17:51.513 [2024-10-16 
20:25:06.302625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.317348] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:51.513 [2024-10-16 20:25:06.317404] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:51.513 [2024-10-16 20:25:06.317418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.317426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:51.513 [2024-10-16 20:25:06.317436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.679 ms 00:17:51.513 [2024-10-16 20:25:06.317443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.344520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.344586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:51.513 [2024-10-16 20:25:06.344599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.011 ms 00:17:51.513 [2024-10-16 20:25:06.344608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.358297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.358349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:51.513 [2024-10-16 20:25:06.358362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.631 ms 00:17:51.513 [2024-10-16 20:25:06.358369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.371588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.371643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:51.513 [2024-10-16 20:25:06.371667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.169 ms 00:17:51.513 [2024-10-16 20:25:06.371674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.513 [2024-10-16 20:25:06.372093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.513 [2024-10-16 20:25:06.372117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:51.513 [2024-10-16 20:25:06.372129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:17:51.513 [2024-10-16 20:25:06.372138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.441911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.775 [2024-10-16 20:25:06.441976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:51.775 [2024-10-16 20:25:06.441991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.754 ms 00:17:51.775 [2024-10-16 20:25:06.442000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.453810] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:51.775 [2024-10-16 20:25:06.457109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.775 [2024-10-16 20:25:06.457152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:51.775 [2024-10-16 20:25:06.457164] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.020 ms 00:17:51.775 [2024-10-16 20:25:06.457173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.457258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.775 [2024-10-16 20:25:06.457268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:51.775 [2024-10-16 20:25:06.457277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:51.775 [2024-10-16 20:25:06.457285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.457352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.775 [2024-10-16 20:25:06.457363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:51.775 [2024-10-16 20:25:06.457372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:51.775 [2024-10-16 20:25:06.457379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.458768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.775 [2024-10-16 20:25:06.458819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:51.775 [2024-10-16 20:25:06.458830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:17:51.775 [2024-10-16 20:25:06.458839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.458878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.775 [2024-10-16 20:25:06.458887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:51.775 [2024-10-16 20:25:06.458896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:51.775 [2024-10-16 20:25:06.458909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.458946] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:51.775 [2024-10-16 20:25:06.458956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.775 [2024-10-16 20:25:06.458964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:51.775 [2024-10-16 20:25:06.458975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:51.775 [2024-10-16 20:25:06.458983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.486230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.775 [2024-10-16 20:25:06.486284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:51.775 [2024-10-16 20:25:06.486297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.227 ms 00:17:51.775 [2024-10-16 20:25:06.486305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.486394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.775 [2024-10-16 20:25:06.486412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:51.775 [2024-10-16 20:25:06.486421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:51.775 [2024-10-16 20:25:06.486429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.775 [2024-10-16 20:25:06.488089] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 304.784 ms, result 0 00:17:52.718  [2024-10-16T20:25:08.591Z] Copying: 12/1024 [MB] (12 MBps) [2024-10-16T20:25:09.535Z] Copying: 26/1024 [MB] (13 MBps) [2024-10-16T20:25:10.923Z] Copying: 49/1024 [MB] (23 MBps) [2024-10-16T20:25:11.866Z] Copying: 63/1024 [MB] (13 MBps) [2024-10-16T20:25:12.811Z] Copying: 85/1024 [MB] (22 MBps) [2024-10-16T20:25:13.753Z] Copying: 103/1024 [MB] (17 MBps) [2024-10-16T20:25:14.698Z] Copying: 121/1024 [MB] (18 MBps) [2024-10-16T20:25:15.642Z] Copying: 138/1024 [MB] (16 MBps) [2024-10-16T20:25:16.586Z] Copying: 162/1024 [MB] (24 MBps) [2024-10-16T20:25:17.529Z] Copying: 191/1024 [MB] (29 MBps) [2024-10-16T20:25:18.914Z] Copying: 221/1024 [MB] (29 MBps) [2024-10-16T20:25:19.857Z] Copying: 236/1024 [MB] (15 MBps) [2024-10-16T20:25:20.800Z] Copying: 250/1024 [MB] (13 MBps) [2024-10-16T20:25:21.803Z] Copying: 267/1024 [MB] (17 MBps) [2024-10-16T20:25:22.746Z] Copying: 292/1024 [MB] (24 MBps) [2024-10-16T20:25:23.690Z] Copying: 309/1024 [MB] (17 MBps) [2024-10-16T20:25:24.634Z] Copying: 329/1024 [MB] (19 MBps) [2024-10-16T20:25:25.577Z] Copying: 350/1024 [MB] (21 MBps) [2024-10-16T20:25:26.523Z] Copying: 369/1024 [MB] (18 MBps) [2024-10-16T20:25:27.910Z] Copying: 382/1024 [MB] (12 MBps) [2024-10-16T20:25:28.852Z] Copying: 399/1024 [MB] (16 MBps) [2024-10-16T20:25:29.796Z] Copying: 422/1024 [MB] (23 MBps) [2024-10-16T20:25:30.740Z] Copying: 450/1024 [MB] (27 MBps) [2024-10-16T20:25:31.685Z] Copying: 470/1024 [MB] (20 MBps) [2024-10-16T20:25:32.629Z] Copying: 490/1024 [MB] (19 MBps) [2024-10-16T20:25:33.574Z] Copying: 514/1024 [MB] (23 MBps) [2024-10-16T20:25:34.518Z] Copying: 527/1024 [MB] (13 MBps) [2024-10-16T20:25:35.904Z] Copying: 556/1024 [MB] (29 MBps) [2024-10-16T20:25:36.846Z] Copying: 586/1024 [MB] (29 MBps) [2024-10-16T20:25:37.802Z] Copying: 615/1024 [MB] (29 MBps) [2024-10-16T20:25:38.802Z] Copying: 645/1024 [MB] (29 MBps) [2024-10-16T20:25:39.747Z] Copying: 667/1024 [MB] (22 MBps) [2024-10-16T20:25:40.691Z] Copying: 678/1024 [MB] (10 MBps) [2024-10-16T20:25:41.636Z] Copying: 706/1024 [MB] (28 MBps) [2024-10-16T20:25:42.576Z] Copying: 718/1024 [MB] (11 MBps) [2024-10-16T20:25:43.522Z] Copying: 733/1024 [MB] (14 MBps) [2024-10-16T20:25:44.910Z] Copying: 746/1024 [MB] (13 MBps) [2024-10-16T20:25:45.854Z] Copying: 765/1024 [MB] (18 MBps) [2024-10-16T20:25:46.798Z] Copying: 786/1024 [MB] (20 MBps) [2024-10-16T20:25:47.748Z] Copying: 807/1024 [MB] (21 MBps) [2024-10-16T20:25:48.693Z] Copying: 829/1024 [MB] (21 MBps) [2024-10-16T20:25:49.637Z] Copying: 846/1024 [MB] (17 MBps) [2024-10-16T20:25:50.582Z] Copying: 872/1024 [MB] (25 MBps) [2024-10-16T20:25:51.525Z] Copying: 892/1024 [MB] (20 MBps) [2024-10-16T20:25:52.914Z] Copying: 912/1024 [MB] (20 MBps) [2024-10-16T20:25:53.858Z] Copying: 929/1024 [MB] (17 MBps) [2024-10-16T20:25:54.802Z] Copying: 951/1024 [MB] (21 MBps) [2024-10-16T20:25:55.754Z] Copying: 982/1024 [MB] (31 MBps) [2024-10-16T20:25:56.752Z] Copying: 1013/1024 [MB] (30 MBps) [2024-10-16T20:25:56.752Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-10-16 20:25:56.445709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.445768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:41.823 [2024-10-16 20:25:56.445783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:41.823 [2024-10-16 20:25:56.445791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 
[2024-10-16 20:25:56.445813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:41.823 [2024-10-16 20:25:56.448788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.448832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:41.823 [2024-10-16 20:25:56.448850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.959 ms 00:18:41.823 [2024-10-16 20:25:56.448858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.451983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.452033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:41.823 [2024-10-16 20:25:56.452058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.098 ms 00:18:41.823 [2024-10-16 20:25:56.452067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.470139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.470189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:41.823 [2024-10-16 20:25:56.470201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.055 ms 00:18:41.823 [2024-10-16 20:25:56.470217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.476295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.476334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:41.823 [2024-10-16 20:25:56.476345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.039 ms 00:18:41.823 [2024-10-16 20:25:56.476352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.503027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.503082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:41.823 [2024-10-16 20:25:56.503093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.606 ms 00:18:41.823 [2024-10-16 20:25:56.503100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.519957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.520002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:41.823 [2024-10-16 20:25:56.520015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.812 ms 00:18:41.823 [2024-10-16 20:25:56.520024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.520200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.520213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:41.823 [2024-10-16 20:25:56.520222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:41.823 [2024-10-16 20:25:56.520231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.546426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.546468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:41.823 [2024-10-16 
20:25:56.546480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.180 ms 00:18:41.823 [2024-10-16 20:25:56.546487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.572237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.572278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:41.823 [2024-10-16 20:25:56.572289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.707 ms 00:18:41.823 [2024-10-16 20:25:56.572307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.597383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.597426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:41.823 [2024-10-16 20:25:56.597436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.033 ms 00:18:41.823 [2024-10-16 20:25:56.597443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.622369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.823 [2024-10-16 20:25:56.622413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:41.823 [2024-10-16 20:25:56.622423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.843 ms 00:18:41.823 [2024-10-16 20:25:56.622430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.823 [2024-10-16 20:25:56.622470] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:41.823 [2024-10-16 20:25:56.622486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:41.823 [2024-10-16 20:25:56.622587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 
20:25:56.622593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:18:41.824 [2024-10-16 20:25:56.622779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.622993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:41.824 [2024-10-16 20:25:56.623268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:41.824 [2024-10-16 20:25:56.623277] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d899f785-ee45-4182-91c8-1d95f73f7d3d 00:18:41.824 [2024-10-16 20:25:56.623284] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:41.824 [2024-10-16 20:25:56.623291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:41.824 [2024-10-16 20:25:56.623298] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:41.824 [2024-10-16 20:25:56.623306] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:41.824 [2024-10-16 20:25:56.623314] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:41.824 [2024-10-16 20:25:56.623322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:41.825 [2024-10-16 20:25:56.623330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:41.825 [2024-10-16 20:25:56.623337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:41.825 [2024-10-16 20:25:56.623350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:41.825 [2024-10-16 20:25:56.623357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.825 [2024-10-16 20:25:56.623365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:41.825 [2024-10-16 20:25:56.623373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:18:41.825 [2024-10-16 20:25:56.623383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.825 [2024-10-16 20:25:56.636963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:41.825 [2024-10-16 20:25:56.637003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:41.825 [2024-10-16 20:25:56.637015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.547 ms 00:18:41.825 [2024-10-16 20:25:56.637022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.825 [2024-10-16 20:25:56.637263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.825 [2024-10-16 20:25:56.637274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:41.825 [2024-10-16 20:25:56.637289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:18:41.825 [2024-10-16 20:25:56.637297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.825 [2024-10-16 20:25:56.676291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.825 [2024-10-16 20:25:56.676340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:41.825 [2024-10-16 20:25:56.676351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.825 [2024-10-16 20:25:56.676359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.825 [2024-10-16 20:25:56.676423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.825 [2024-10-16 20:25:56.676431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:41.825 [2024-10-16 20:25:56.676446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.825 [2024-10-16 20:25:56.676454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.825 [2024-10-16 20:25:56.676528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.825 [2024-10-16 20:25:56.676539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:41.825 [2024-10-16 20:25:56.676548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.825 [2024-10-16 20:25:56.676556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.825 [2024-10-16 20:25:56.676571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.825 [2024-10-16 20:25:56.676578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:41.825 [2024-10-16 20:25:56.676586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.825 [2024-10-16 20:25:56.676596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.086 [2024-10-16 20:25:56.757092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.086 [2024-10-16 20:25:56.757146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.086 [2024-10-16 20:25:56.757158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.086 [2024-10-16 20:25:56.757168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.086 [2024-10-16 20:25:56.789562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.086 [2024-10-16 20:25:56.789611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:42.086 [2024-10-16 20:25:56.789623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.086 [2024-10-16 20:25:56.789638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.086 
[2024-10-16 20:25:56.789706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.086 [2024-10-16 20:25:56.789715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:42.086 [2024-10-16 20:25:56.789724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.086 [2024-10-16 20:25:56.789732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.086 [2024-10-16 20:25:56.789774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.086 [2024-10-16 20:25:56.789783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:42.086 [2024-10-16 20:25:56.789793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.086 [2024-10-16 20:25:56.789800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.086 [2024-10-16 20:25:56.789905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.086 [2024-10-16 20:25:56.789915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:42.086 [2024-10-16 20:25:56.789923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.086 [2024-10-16 20:25:56.789930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.086 [2024-10-16 20:25:56.789962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.086 [2024-10-16 20:25:56.789971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:42.086 [2024-10-16 20:25:56.789980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.086 [2024-10-16 20:25:56.789987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.086 [2024-10-16 20:25:56.790030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.086 [2024-10-16 20:25:56.790039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:42.086 [2024-10-16 20:25:56.790082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.086 [2024-10-16 20:25:56.790089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.086 [2024-10-16 20:25:56.790136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.086 [2024-10-16 20:25:56.790146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:42.086 [2024-10-16 20:25:56.790154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.086 [2024-10-16 20:25:56.790162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.086 [2024-10-16 20:25:56.790292] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.548 ms, result 0 00:18:43.472 00:18:43.472 00:18:43.472 20:25:58 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:43.472 [2024-10-16 20:25:58.164515] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:18:43.472 [2024-10-16 20:25:58.164902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73701 ] 00:18:43.472 [2024-10-16 20:25:58.317713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.734 [2024-10-16 20:25:58.544077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.995 [2024-10-16 20:25:58.822778] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.995 [2024-10-16 20:25:58.822851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:44.257 [2024-10-16 20:25:58.978405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:58.978458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:44.257 [2024-10-16 20:25:58.978473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:44.257 [2024-10-16 20:25:58.978484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:58.978538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:58.978548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.257 [2024-10-16 20:25:58.978557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:44.257 [2024-10-16 20:25:58.978565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:58.978585] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:44.257 [2024-10-16 20:25:58.979355] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:44.257 [2024-10-16 20:25:58.979380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:58.979389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.257 [2024-10-16 20:25:58.979397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:18:44.257 [2024-10-16 20:25:58.979405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:58.981094] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:44.257 [2024-10-16 20:25:58.995410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:58.995454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:44.257 [2024-10-16 20:25:58.995466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.318 ms 00:18:44.257 [2024-10-16 20:25:58.995474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:58.995547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:58.995556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:44.257 [2024-10-16 20:25:58.995566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:44.257 [2024-10-16 20:25:58.995573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:59.003523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 
20:25:59.003559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.257 [2024-10-16 20:25:59.003569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.874 ms 00:18:44.257 [2024-10-16 20:25:59.003577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:59.003671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:59.003680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.257 [2024-10-16 20:25:59.003689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:44.257 [2024-10-16 20:25:59.003697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:59.003743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:59.003753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:44.257 [2024-10-16 20:25:59.003762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:44.257 [2024-10-16 20:25:59.003770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:59.003799] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:44.257 [2024-10-16 20:25:59.008025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:59.008067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.257 [2024-10-16 20:25:59.008077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.237 ms 00:18:44.257 [2024-10-16 20:25:59.008085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:59.008126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:59.008135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:44.257 [2024-10-16 20:25:59.008144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:44.257 [2024-10-16 20:25:59.008154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:59.008203] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:44.257 [2024-10-16 20:25:59.008224] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:44.257 [2024-10-16 20:25:59.008258] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:44.257 [2024-10-16 20:25:59.008274] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:44.257 [2024-10-16 20:25:59.008349] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:44.257 [2024-10-16 20:25:59.008360] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:44.257 [2024-10-16 20:25:59.008373] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:44.257 [2024-10-16 20:25:59.008384] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:44.257 [2024-10-16 20:25:59.008393] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:44.257 [2024-10-16 20:25:59.008401] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:44.257 [2024-10-16 20:25:59.008408] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:44.257 [2024-10-16 20:25:59.008416] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:44.257 [2024-10-16 20:25:59.008423] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:44.257 [2024-10-16 20:25:59.008431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:59.008439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:44.257 [2024-10-16 20:25:59.008447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:18:44.257 [2024-10-16 20:25:59.008455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:59.008518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.257 [2024-10-16 20:25:59.008527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:44.257 [2024-10-16 20:25:59.008535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:44.257 [2024-10-16 20:25:59.008542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.257 [2024-10-16 20:25:59.008613] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:44.257 [2024-10-16 20:25:59.008623] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:44.257 [2024-10-16 20:25:59.008631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.257 [2024-10-16 20:25:59.008638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008646] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:44.257 [2024-10-16 20:25:59.008653] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:44.257 [2024-10-16 20:25:59.008667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:44.257 [2024-10-16 20:25:59.008674] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.257 [2024-10-16 20:25:59.008688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:44.257 [2024-10-16 20:25:59.008695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:44.257 [2024-10-16 20:25:59.008701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.257 [2024-10-16 20:25:59.008708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:44.257 [2024-10-16 20:25:59.008714] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:44.257 [2024-10-16 20:25:59.008721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:44.257 [2024-10-16 20:25:59.008742] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:44.257 [2024-10-16 20:25:59.008750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:44.257 [2024-10-16 20:25:59.008765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:44.257 [2024-10-16 20:25:59.008772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:44.257 [2024-10-16 20:25:59.008779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:44.257 [2024-10-16 20:25:59.008786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:44.257 [2024-10-16 20:25:59.008799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:44.257 [2024-10-16 20:25:59.008806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:44.257 [2024-10-16 20:25:59.008819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:44.257 [2024-10-16 20:25:59.008825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:44.257 [2024-10-16 20:25:59.008837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:44.257 [2024-10-16 20:25:59.008844] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:44.257 [2024-10-16 20:25:59.008857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:44.257 [2024-10-16 20:25:59.008864] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:44.257 [2024-10-16 20:25:59.008870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.257 [2024-10-16 20:25:59.008876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:44.257 [2024-10-16 20:25:59.008884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:44.257 [2024-10-16 20:25:59.008891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.257 [2024-10-16 20:25:59.008897] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:44.257 [2024-10-16 20:25:59.008907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:44.257 [2024-10-16 20:25:59.008914] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.258 [2024-10-16 20:25:59.008923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.258 [2024-10-16 20:25:59.008931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:44.258 [2024-10-16 20:25:59.008937] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:44.258 [2024-10-16 20:25:59.008944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:44.258 [2024-10-16 20:25:59.008950] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:44.258 [2024-10-16 20:25:59.008957] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:44.258 [2024-10-16 20:25:59.008964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:44.258 [2024-10-16 20:25:59.008973] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:44.258 [2024-10-16 20:25:59.008982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.258 [2024-10-16 20:25:59.008991] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:44.258 [2024-10-16 20:25:59.008999] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:44.258 [2024-10-16 20:25:59.009006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:44.258 [2024-10-16 20:25:59.009013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:44.258 [2024-10-16 20:25:59.009020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:44.258 [2024-10-16 20:25:59.009027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:44.258 [2024-10-16 20:25:59.009035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:44.258 [2024-10-16 20:25:59.009058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:44.258 [2024-10-16 20:25:59.009066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:44.258 [2024-10-16 20:25:59.009073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:44.258 [2024-10-16 20:25:59.009081] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:44.258 [2024-10-16 20:25:59.009088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:44.258 [2024-10-16 20:25:59.009097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:44.258 [2024-10-16 20:25:59.009104] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:44.258 [2024-10-16 20:25:59.009112] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.258 [2024-10-16 20:25:59.009120] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:44.258 [2024-10-16 20:25:59.009128] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:44.258 [2024-10-16 20:25:59.009135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:44.258 [2024-10-16 20:25:59.009144] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:18:44.258 [2024-10-16 20:25:59.009153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.009161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:44.258 [2024-10-16 20:25:59.009169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:18:44.258 [2024-10-16 20:25:59.009177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.027202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.027242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.258 [2024-10-16 20:25:59.027254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.982 ms 00:18:44.258 [2024-10-16 20:25:59.027268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.027363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.027372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:44.258 [2024-10-16 20:25:59.027382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:44.258 [2024-10-16 20:25:59.027391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.071298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.071342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.258 [2024-10-16 20:25:59.071354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.856 ms 00:18:44.258 [2024-10-16 20:25:59.071362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.071412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.071422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.258 [2024-10-16 20:25:59.071432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:44.258 [2024-10-16 20:25:59.071439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.071975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.071996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.258 [2024-10-16 20:25:59.072006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:18:44.258 [2024-10-16 20:25:59.072020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.072170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.072181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.258 [2024-10-16 20:25:59.072191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:44.258 [2024-10-16 20:25:59.072198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.088517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.088567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.258 [2024-10-16 20:25:59.088578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.294 ms 00:18:44.258 [2024-10-16 
20:25:59.088587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.102857] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:44.258 [2024-10-16 20:25:59.102898] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:44.258 [2024-10-16 20:25:59.102911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.102919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:44.258 [2024-10-16 20:25:59.102929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.216 ms 00:18:44.258 [2024-10-16 20:25:59.102937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.129765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.129808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:44.258 [2024-10-16 20:25:59.129820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.776 ms 00:18:44.258 [2024-10-16 20:25:59.129829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.142998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.143036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:44.258 [2024-10-16 20:25:59.143057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.118 ms 00:18:44.258 [2024-10-16 20:25:59.143065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.155624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.155668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:44.258 [2024-10-16 20:25:59.155679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.513 ms 00:18:44.258 [2024-10-16 20:25:59.155686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.258 [2024-10-16 20:25:59.156088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.258 [2024-10-16 20:25:59.156108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:44.258 [2024-10-16 20:25:59.156119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:44.258 [2024-10-16 20:25:59.156127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.222624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.520 [2024-10-16 20:25:59.222672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:44.520 [2024-10-16 20:25:59.222686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.479 ms 00:18:44.520 [2024-10-16 20:25:59.222694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.234140] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:44.520 [2024-10-16 20:25:59.237071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.520 [2024-10-16 20:25:59.237106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:44.520 [2024-10-16 20:25:59.237117] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.319 ms 00:18:44.520 [2024-10-16 20:25:59.237130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.237204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.520 [2024-10-16 20:25:59.237215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:44.520 [2024-10-16 20:25:59.237224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:44.520 [2024-10-16 20:25:59.237232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.237299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.520 [2024-10-16 20:25:59.237309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:44.520 [2024-10-16 20:25:59.237317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:44.520 [2024-10-16 20:25:59.237326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.238671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.520 [2024-10-16 20:25:59.238708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:44.520 [2024-10-16 20:25:59.238718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:18:44.520 [2024-10-16 20:25:59.238726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.238761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.520 [2024-10-16 20:25:59.238770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.520 [2024-10-16 20:25:59.238784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:44.520 [2024-10-16 20:25:59.238792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.238828] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:44.520 [2024-10-16 20:25:59.238838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.520 [2024-10-16 20:25:59.238849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:44.520 [2024-10-16 20:25:59.238858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:44.520 [2024-10-16 20:25:59.238865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.265088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.520 [2024-10-16 20:25:59.265129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.520 [2024-10-16 20:25:59.265141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.202 ms 00:18:44.520 [2024-10-16 20:25:59.265150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.265239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.520 [2024-10-16 20:25:59.265249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.520 [2024-10-16 20:25:59.265258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:44.520 [2024-10-16 20:25:59.265266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.520 [2024-10-16 20:25:59.266489] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 287.586 ms, result 0 00:18:45.906  [2024-10-16T20:26:01.778Z] Copying: 26/1024 [MB] (26 MBps) [2024-10-16T20:26:02.720Z] Copying: 46/1024 [MB] (19 MBps) [2024-10-16T20:26:03.665Z] Copying: 64/1024 [MB] (18 MBps) [2024-10-16T20:26:04.610Z] Copying: 82/1024 [MB] (17 MBps) [2024-10-16T20:26:05.555Z] Copying: 104/1024 [MB] (22 MBps) [2024-10-16T20:26:06.500Z] Copying: 124/1024 [MB] (19 MBps) [2024-10-16T20:26:07.888Z] Copying: 142/1024 [MB] (17 MBps) [2024-10-16T20:26:08.462Z] Copying: 167/1024 [MB] (25 MBps) [2024-10-16T20:26:09.850Z] Copying: 189/1024 [MB] (21 MBps) [2024-10-16T20:26:10.794Z] Copying: 211/1024 [MB] (22 MBps) [2024-10-16T20:26:11.739Z] Copying: 226/1024 [MB] (14 MBps) [2024-10-16T20:26:12.687Z] Copying: 239/1024 [MB] (12 MBps) [2024-10-16T20:26:13.666Z] Copying: 249/1024 [MB] (10 MBps) [2024-10-16T20:26:14.608Z] Copying: 260/1024 [MB] (10 MBps) [2024-10-16T20:26:15.552Z] Copying: 287/1024 [MB] (27 MBps) [2024-10-16T20:26:16.496Z] Copying: 304/1024 [MB] (16 MBps) [2024-10-16T20:26:17.882Z] Copying: 317/1024 [MB] (12 MBps) [2024-10-16T20:26:18.456Z] Copying: 328/1024 [MB] (11 MBps) [2024-10-16T20:26:19.842Z] Copying: 339/1024 [MB] (10 MBps) [2024-10-16T20:26:20.786Z] Copying: 353/1024 [MB] (13 MBps) [2024-10-16T20:26:21.729Z] Copying: 368/1024 [MB] (14 MBps) [2024-10-16T20:26:22.673Z] Copying: 388/1024 [MB] (19 MBps) [2024-10-16T20:26:23.617Z] Copying: 411/1024 [MB] (23 MBps) [2024-10-16T20:26:24.561Z] Copying: 431/1024 [MB] (19 MBps) [2024-10-16T20:26:25.506Z] Copying: 447/1024 [MB] (16 MBps) [2024-10-16T20:26:26.448Z] Copying: 462/1024 [MB] (14 MBps) [2024-10-16T20:26:27.833Z] Copying: 475/1024 [MB] (13 MBps) [2024-10-16T20:26:28.778Z] Copying: 486/1024 [MB] (10 MBps) [2024-10-16T20:26:29.723Z] Copying: 496/1024 [MB] (10 MBps) [2024-10-16T20:26:30.706Z] Copying: 507/1024 [MB] (10 MBps) [2024-10-16T20:26:31.651Z] Copying: 521/1024 [MB] (13 MBps) [2024-10-16T20:26:32.594Z] Copying: 536/1024 [MB] (14 MBps) [2024-10-16T20:26:33.537Z] Copying: 551/1024 [MB] (15 MBps) [2024-10-16T20:26:34.479Z] Copying: 574/1024 [MB] (22 MBps) [2024-10-16T20:26:35.863Z] Copying: 587/1024 [MB] (13 MBps) [2024-10-16T20:26:36.808Z] Copying: 606/1024 [MB] (19 MBps) [2024-10-16T20:26:37.755Z] Copying: 621/1024 [MB] (14 MBps) [2024-10-16T20:26:38.699Z] Copying: 635/1024 [MB] (13 MBps) [2024-10-16T20:26:39.644Z] Copying: 646/1024 [MB] (10 MBps) [2024-10-16T20:26:40.589Z] Copying: 662/1024 [MB] (16 MBps) [2024-10-16T20:26:41.533Z] Copying: 679/1024 [MB] (16 MBps) [2024-10-16T20:26:42.478Z] Copying: 699/1024 [MB] (20 MBps) [2024-10-16T20:26:43.865Z] Copying: 720/1024 [MB] (20 MBps) [2024-10-16T20:26:44.809Z] Copying: 739/1024 [MB] (19 MBps) [2024-10-16T20:26:45.752Z] Copying: 760/1024 [MB] (20 MBps) [2024-10-16T20:26:46.697Z] Copying: 775/1024 [MB] (15 MBps) [2024-10-16T20:26:47.663Z] Copying: 790/1024 [MB] (15 MBps) [2024-10-16T20:26:48.608Z] Copying: 810/1024 [MB] (20 MBps) [2024-10-16T20:26:49.551Z] Copying: 831/1024 [MB] (20 MBps) [2024-10-16T20:26:50.496Z] Copying: 844/1024 [MB] (12 MBps) [2024-10-16T20:26:51.882Z] Copying: 866/1024 [MB] (21 MBps) [2024-10-16T20:26:52.455Z] Copying: 886/1024 [MB] (20 MBps) [2024-10-16T20:26:53.842Z] Copying: 904/1024 [MB] (18 MBps) [2024-10-16T20:26:54.786Z] Copying: 917/1024 [MB] (13 MBps) [2024-10-16T20:26:55.730Z] Copying: 933/1024 [MB] (15 MBps) [2024-10-16T20:26:56.675Z] Copying: 944/1024 [MB] (11 MBps) [2024-10-16T20:26:57.618Z] Copying: 967/1024 [MB] (22 MBps) [2024-10-16T20:26:58.561Z] Copying: 
992/1024 [MB] (25 MBps) [2024-10-16T20:26:59.134Z] Copying: 1014/1024 [MB] (21 MBps) [2024-10-16T20:26:59.707Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-16 20:26:59.397901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.398307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:44.779 [2024-10-16 20:26:59.398336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.779 [2024-10-16 20:26:59.398346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.398383] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:44.779 [2024-10-16 20:26:59.401359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.401574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:44.779 [2024-10-16 20:26:59.401598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:19:44.779 [2024-10-16 20:26:59.401607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.401892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.401906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:44.779 [2024-10-16 20:26:59.401918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:44.779 [2024-10-16 20:26:59.401926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.405416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.405446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:44.779 [2024-10-16 20:26:59.405463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.474 ms 00:19:44.779 [2024-10-16 20:26:59.405471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.412405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.412613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:44.779 [2024-10-16 20:26:59.412636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.913 ms 00:19:44.779 [2024-10-16 20:26:59.412645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.443075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.443268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:44.779 [2024-10-16 20:26:59.443290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.342 ms 00:19:44.779 [2024-10-16 20:26:59.443299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.460198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.460248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:44.779 [2024-10-16 20:26:59.460262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.771 ms 00:19:44.779 [2024-10-16 20:26:59.460279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.460452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:44.779 [2024-10-16 20:26:59.460467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:44.779 [2024-10-16 20:26:59.460477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:19:44.779 [2024-10-16 20:26:59.460486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.487625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.487674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:44.779 [2024-10-16 20:26:59.487686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.121 ms 00:19:44.779 [2024-10-16 20:26:59.487694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.514259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.514464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:44.779 [2024-10-16 20:26:59.514501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.515 ms 00:19:44.779 [2024-10-16 20:26:59.514509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.540635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.540686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:44.779 [2024-10-16 20:26:59.540698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.056 ms 00:19:44.779 [2024-10-16 20:26:59.540705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.566676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.779 [2024-10-16 20:26:59.566725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:44.779 [2024-10-16 20:26:59.566736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.861 ms 00:19:44.779 [2024-10-16 20:26:59.566743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.779 [2024-10-16 20:26:59.566793] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:44.779 [2024-10-16 20:26:59.566818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 
00:19:44.779 [2024-10-16 20:26:59.566897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.566999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:44.779 [2024-10-16 20:26:59.567127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 
wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567536] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:44.780 [2024-10-16 20:26:59.567679] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:44.780 [2024-10-16 20:26:59.567688] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d899f785-ee45-4182-91c8-1d95f73f7d3d 00:19:44.780 [2024-10-16 20:26:59.567699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:44.780 [2024-10-16 20:26:59.567708] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:44.780 [2024-10-16 20:26:59.567716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:44.780 [2024-10-16 20:26:59.567724] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:44.780 [2024-10-16 20:26:59.567732] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:44.780 [2024-10-16 20:26:59.567740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:44.780 [2024-10-16 20:26:59.567747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:44.780 [2024-10-16 20:26:59.567762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:44.780 [2024-10-16 20:26:59.567769] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:44.780 [2024-10-16 20:26:59.567777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.780 [2024-10-16 20:26:59.567785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:44.780 [2024-10-16 20:26:59.567797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:19:44.780 [2024-10-16 20:26:59.567805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.780 [2024-10-16 20:26:59.581310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.780 [2024-10-16 20:26:59.581353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:44.780 [2024-10-16 20:26:59.581366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.455 ms 00:19:44.780 [2024-10-16 20:26:59.581375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.780 [2024-10-16 20:26:59.581593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.780 [2024-10-16 20:26:59.581612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:44.780 [2024-10-16 20:26:59.581621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:19:44.780 [2024-10-16 20:26:59.581630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.781 [2024-10-16 20:26:59.621661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.781 [2024-10-16 20:26:59.621713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:44.781 [2024-10-16 20:26:59.621725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.781 [2024-10-16 20:26:59.621734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.781 [2024-10-16 20:26:59.621805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.781 [2024-10-16 20:26:59.621821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:44.781 [2024-10-16 20:26:59.621830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.781 [2024-10-16 20:26:59.621838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.781 [2024-10-16 20:26:59.621918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.781 [2024-10-16 20:26:59.621932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:44.781 [2024-10-16 20:26:59.621940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.781 [2024-10-16 20:26:59.621949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.781 [2024-10-16 20:26:59.621966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.781 [2024-10-16 20:26:59.621974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:44.781 [2024-10-16 20:26:59.621987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.781 [2024-10-16 20:26:59.621995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.042 [2024-10-16 20:26:59.708331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.042 [2024-10-16 20:26:59.708548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:45.042 [2024-10-16 20:26:59.708573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:45.042 [2024-10-16 20:26:59.708583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.042 [2024-10-16 20:26:59.741062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.042 [2024-10-16 20:26:59.741106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:45.042 [2024-10-16 20:26:59.741124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.042 [2024-10-16 20:26:59.741132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.042 [2024-10-16 20:26:59.741225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.042 [2024-10-16 20:26:59.741237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:45.042 [2024-10-16 20:26:59.741245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.042 [2024-10-16 20:26:59.741253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.042 [2024-10-16 20:26:59.741295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.042 [2024-10-16 20:26:59.741306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:45.042 [2024-10-16 20:26:59.741315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.042 [2024-10-16 20:26:59.741327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.042 [2024-10-16 20:26:59.741427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.042 [2024-10-16 20:26:59.741440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:45.042 [2024-10-16 20:26:59.741448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.042 [2024-10-16 20:26:59.741456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.042 [2024-10-16 20:26:59.741494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.042 [2024-10-16 20:26:59.741505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:45.042 [2024-10-16 20:26:59.741513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.042 [2024-10-16 20:26:59.741521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.042 [2024-10-16 20:26:59.741568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.042 [2024-10-16 20:26:59.741579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:45.042 [2024-10-16 20:26:59.741590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.042 [2024-10-16 20:26:59.741597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.042 [2024-10-16 20:26:59.741641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.042 [2024-10-16 20:26:59.741651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:45.042 [2024-10-16 20:26:59.741660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.042 [2024-10-16 20:26:59.741670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.042 [2024-10-16 20:26:59.741800] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.872 ms, result 0 00:19:45.985 00:19:45.985 00:19:45.985 20:27:00 -- ftl/restore.sh@76 -- 
# md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:47.902 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:47.902 20:27:02 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:48.163 [2024-10-16 20:27:02.831921] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:48.163 [2024-10-16 20:27:02.832034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74374 ] 00:19:48.163 [2024-10-16 20:27:02.982257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.425 [2024-10-16 20:27:03.221661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.685 [2024-10-16 20:27:03.509333] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:48.685 [2024-10-16 20:27:03.509418] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:48.948 [2024-10-16 20:27:03.664416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.664481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:48.948 [2024-10-16 20:27:03.664497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:48.948 [2024-10-16 20:27:03.664508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.664565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.664576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.948 [2024-10-16 20:27:03.664585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:48.948 [2024-10-16 20:27:03.664593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.664613] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:48.948 [2024-10-16 20:27:03.665791] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:48.948 [2024-10-16 20:27:03.665860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.665871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.948 [2024-10-16 20:27:03.665883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:19:48.948 [2024-10-16 20:27:03.665891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.667691] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:48.948 [2024-10-16 20:27:03.682321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.682372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:48.948 [2024-10-16 20:27:03.682386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.631 ms 00:19:48.948 [2024-10-16 20:27:03.682395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.682479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
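The md5sum -c at ftl/restore.sh@76 above confirms the read-back matched the stored checksum (testfile: OK), and the spdk_dd at ftl/restore.sh@79 then writes the same file back into the bdev at an offset, which is why a second FTL startup trace begins here. A sketch of that verify-and-rewrite pair, again using only flags and paths taken from the log; --seek offsets the output side in I/O units, mirroring dd semantics, so 131072 lands the write halfway into the 262144-unit region:

    # Verify the data read back from ftl0 against the expected checksum
    md5sum -c "$SPDK_DIR"/test/ftl/testfile.md5

    # Rewrite the file into ftl0, skipping the first 131072 output units
    "$SPDK_DIR"/build/bin/spdk_dd \
        --if="$SPDK_DIR"/test/ftl/testfile --ob=ftl0 \
        --json="$SPDK_DIR"/test/ftl/config/ftl.json \
        --seek=131072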
00:19:48.948 [2024-10-16 20:27:03.682490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:48.948 [2024-10-16 20:27:03.682499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:48.948 [2024-10-16 20:27:03.682506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.691533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.691579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.948 [2024-10-16 20:27:03.691589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.945 ms 00:19:48.948 [2024-10-16 20:27:03.691598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.691701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.691712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.948 [2024-10-16 20:27:03.691722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:48.948 [2024-10-16 20:27:03.691732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.691780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.691790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:48.948 [2024-10-16 20:27:03.691799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:48.948 [2024-10-16 20:27:03.691807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.691839] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.948 [2024-10-16 20:27:03.696109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.696150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.948 [2024-10-16 20:27:03.696161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.283 ms 00:19:48.948 [2024-10-16 20:27:03.696170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.696211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.696219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:48.948 [2024-10-16 20:27:03.696229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:48.948 [2024-10-16 20:27:03.696240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.696293] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:48.948 [2024-10-16 20:27:03.696317] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:48.948 [2024-10-16 20:27:03.696352] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:48.948 [2024-10-16 20:27:03.696369] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:48.948 [2024-10-16 20:27:03.696444] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:48.948 [2024-10-16 20:27:03.696457] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:48.948 [2024-10-16 20:27:03.696473] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:48.948 [2024-10-16 20:27:03.696484] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:48.948 [2024-10-16 20:27:03.696493] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:48.948 [2024-10-16 20:27:03.696502] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:48.948 [2024-10-16 20:27:03.696510] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:48.948 [2024-10-16 20:27:03.696520] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:48.948 [2024-10-16 20:27:03.696528] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:48.948 [2024-10-16 20:27:03.696537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.696545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:48.948 [2024-10-16 20:27:03.696555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:48.948 [2024-10-16 20:27:03.696563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.696626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.948 [2024-10-16 20:27:03.696636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:48.948 [2024-10-16 20:27:03.696644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:48.948 [2024-10-16 20:27:03.696651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.948 [2024-10-16 20:27:03.696724] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:48.948 [2024-10-16 20:27:03.696736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:48.948 [2024-10-16 20:27:03.696745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.948 [2024-10-16 20:27:03.696754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.948 [2024-10-16 20:27:03.696763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:48.948 [2024-10-16 20:27:03.696771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:48.948 [2024-10-16 20:27:03.696778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:48.948 [2024-10-16 20:27:03.696788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:48.948 [2024-10-16 20:27:03.696796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:48.948 [2024-10-16 20:27:03.696804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.948 [2024-10-16 20:27:03.696811] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:48.948 [2024-10-16 20:27:03.696820] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:48.948 [2024-10-16 20:27:03.696827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.948 [2024-10-16 20:27:03.696834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:48.948 [2024-10-16 20:27:03.696843] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:19:48.948 [2024-10-16 20:27:03.696851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.948 [2024-10-16 20:27:03.696864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:48.948 [2024-10-16 20:27:03.696871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:48.948 [2024-10-16 20:27:03.696879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.948 [2024-10-16 20:27:03.696887] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:48.948 [2024-10-16 20:27:03.696893] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:48.948 [2024-10-16 20:27:03.696900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:48.948 [2024-10-16 20:27:03.696907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:48.948 [2024-10-16 20:27:03.696915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:48.948 [2024-10-16 20:27:03.696922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:48.948 [2024-10-16 20:27:03.696929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:48.948 [2024-10-16 20:27:03.696936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:48.948 [2024-10-16 20:27:03.696942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:48.948 [2024-10-16 20:27:03.696948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:48.948 [2024-10-16 20:27:03.696955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:48.948 [2024-10-16 20:27:03.696961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:48.948 [2024-10-16 20:27:03.696968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:48.948 [2024-10-16 20:27:03.696976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:48.948 [2024-10-16 20:27:03.696982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:48.948 [2024-10-16 20:27:03.696988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:48.948 [2024-10-16 20:27:03.696995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:48.948 [2024-10-16 20:27:03.697001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.949 [2024-10-16 20:27:03.697007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:48.949 [2024-10-16 20:27:03.697015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:48.949 [2024-10-16 20:27:03.697023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.949 [2024-10-16 20:27:03.697029] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:48.949 [2024-10-16 20:27:03.697038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:48.949 [2024-10-16 20:27:03.697068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.949 [2024-10-16 20:27:03.697079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.949 [2024-10-16 20:27:03.697088] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:48.949 [2024-10-16 20:27:03.697096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:48.949 [2024-10-16 20:27:03.697105] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:48.949 [2024-10-16 20:27:03.697113] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:48.949 [2024-10-16 20:27:03.697120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:48.949 [2024-10-16 20:27:03.697145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:48.949 [2024-10-16 20:27:03.697154] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:48.949 [2024-10-16 20:27:03.697164] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.949 [2024-10-16 20:27:03.697175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:48.949 [2024-10-16 20:27:03.697183] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:48.949 [2024-10-16 20:27:03.697204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:48.949 [2024-10-16 20:27:03.697212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:48.949 [2024-10-16 20:27:03.697231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:48.949 [2024-10-16 20:27:03.697239] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:48.949 [2024-10-16 20:27:03.697246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:48.949 [2024-10-16 20:27:03.697253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:48.949 [2024-10-16 20:27:03.697260] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:48.949 [2024-10-16 20:27:03.697267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:48.949 [2024-10-16 20:27:03.697276] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:48.949 [2024-10-16 20:27:03.697284] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:48.949 [2024-10-16 20:27:03.697292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:48.949 [2024-10-16 20:27:03.697298] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:48.949 [2024-10-16 20:27:03.697306] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.949 [2024-10-16 20:27:03.697316] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:48.949 [2024-10-16 20:27:03.697323] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:48.949 [2024-10-16 20:27:03.697331] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:48.949 [2024-10-16 20:27:03.697338] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:48.949 [2024-10-16 20:27:03.697346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.697353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:48.949 [2024-10-16 20:27:03.697360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:19:48.949 [2024-10-16 20:27:03.697369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.718772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.718832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.949 [2024-10-16 20:27:03.718846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.357 ms 00:19:48.949 [2024-10-16 20:27:03.718861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.718960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.718970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:48.949 [2024-10-16 20:27:03.718979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:48.949 [2024-10-16 20:27:03.718987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.763906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.763964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.949 [2024-10-16 20:27:03.763977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.823 ms 00:19:48.949 [2024-10-16 20:27:03.763985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.764038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.764072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.949 [2024-10-16 20:27:03.764082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:48.949 [2024-10-16 20:27:03.764090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.764698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.764747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.949 [2024-10-16 20:27:03.764758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:19:48.949 [2024-10-16 20:27:03.764774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.764913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.764924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.949 [2024-10-16 20:27:03.764933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 
00:19:48.949 [2024-10-16 20:27:03.764941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.781967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.782019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.949 [2024-10-16 20:27:03.782031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.004 ms 00:19:48.949 [2024-10-16 20:27:03.782040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.796753] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:48.949 [2024-10-16 20:27:03.796977] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:48.949 [2024-10-16 20:27:03.796999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.797008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:48.949 [2024-10-16 20:27:03.797019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.818 ms 00:19:48.949 [2024-10-16 20:27:03.797027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.824119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.824173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:48.949 [2024-10-16 20:27:03.824186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.950 ms 00:19:48.949 [2024-10-16 20:27:03.824194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.837932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.837982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:48.949 [2024-10-16 20:27:03.837995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.673 ms 00:19:48.949 [2024-10-16 20:27:03.838003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.851228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.851428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:48.949 [2024-10-16 20:27:03.851450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.151 ms 00:19:48.949 [2024-10-16 20:27:03.851458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.949 [2024-10-16 20:27:03.851853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.949 [2024-10-16 20:27:03.851868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:48.949 [2024-10-16 20:27:03.851879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:19:48.949 [2024-10-16 20:27:03.851886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.921127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.218 [2024-10-16 20:27:03.921210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:49.218 [2024-10-16 20:27:03.921229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.221 ms 00:19:49.218 [2024-10-16 20:27:03.921238] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.932835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:49.218 [2024-10-16 20:27:03.936092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.218 [2024-10-16 20:27:03.936135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:49.218 [2024-10-16 20:27:03.936149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.791 ms 00:19:49.218 [2024-10-16 20:27:03.936164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.936240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.218 [2024-10-16 20:27:03.936252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:49.218 [2024-10-16 20:27:03.936263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:49.218 [2024-10-16 20:27:03.936273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.936340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.218 [2024-10-16 20:27:03.936351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:49.218 [2024-10-16 20:27:03.936360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:49.218 [2024-10-16 20:27:03.936373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.937756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.218 [2024-10-16 20:27:03.937804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:49.218 [2024-10-16 20:27:03.937815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:19:49.218 [2024-10-16 20:27:03.937823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.937861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.218 [2024-10-16 20:27:03.937869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:49.218 [2024-10-16 20:27:03.937884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:49.218 [2024-10-16 20:27:03.937892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.937931] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:49.218 [2024-10-16 20:27:03.937942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.218 [2024-10-16 20:27:03.937954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:49.218 [2024-10-16 20:27:03.937963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:49.218 [2024-10-16 20:27:03.937970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.964505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.218 [2024-10-16 20:27:03.964555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:49.218 [2024-10-16 20:27:03.964570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.514 ms 00:19:49.218 [2024-10-16 20:27:03.964578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.964671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:49.218 [2024-10-16 20:27:03.964681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:49.218 [2024-10-16 20:27:03.964692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:49.218 [2024-10-16 20:27:03.964701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.218 [2024-10-16 20:27:03.966008] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.115 ms, result 0 00:19:50.228  [2024-10-16T20:27:06.101Z] Copying: 18/1024 [MB] (18 MBps) [2024-10-16T20:27:07.046Z] Copying: 38/1024 [MB] (19 MBps) [2024-10-16T20:27:07.988Z] Copying: 58/1024 [MB] (20 MBps) [2024-10-16T20:27:09.376Z] Copying: 83/1024 [MB] (24 MBps) [2024-10-16T20:27:10.316Z] Copying: 109/1024 [MB] (25 MBps) [2024-10-16T20:27:11.259Z] Copying: 128/1024 [MB] (18 MBps) [2024-10-16T20:27:12.203Z] Copying: 147/1024 [MB] (19 MBps) [2024-10-16T20:27:13.144Z] Copying: 169/1024 [MB] (21 MBps) [2024-10-16T20:27:14.088Z] Copying: 196/1024 [MB] (27 MBps) [2024-10-16T20:27:15.031Z] Copying: 228/1024 [MB] (31 MBps) [2024-10-16T20:27:16.419Z] Copying: 246/1024 [MB] (18 MBps) [2024-10-16T20:27:16.992Z] Copying: 278/1024 [MB] (32 MBps) [2024-10-16T20:27:18.393Z] Copying: 302/1024 [MB] (24 MBps) [2024-10-16T20:27:19.339Z] Copying: 315/1024 [MB] (12 MBps) [2024-10-16T20:27:20.282Z] Copying: 345/1024 [MB] (29 MBps) [2024-10-16T20:27:21.300Z] Copying: 375/1024 [MB] (30 MBps) [2024-10-16T20:27:22.244Z] Copying: 395/1024 [MB] (20 MBps) [2024-10-16T20:27:23.190Z] Copying: 413/1024 [MB] (18 MBps) [2024-10-16T20:27:24.134Z] Copying: 431/1024 [MB] (17 MBps) [2024-10-16T20:27:25.077Z] Copying: 442/1024 [MB] (10 MBps) [2024-10-16T20:27:26.021Z] Copying: 464/1024 [MB] (22 MBps) [2024-10-16T20:27:27.410Z] Copying: 482/1024 [MB] (17 MBps) [2024-10-16T20:27:27.983Z] Copying: 502/1024 [MB] (20 MBps) [2024-10-16T20:27:29.371Z] Copying: 514/1024 [MB] (11 MBps) [2024-10-16T20:27:30.318Z] Copying: 532/1024 [MB] (17 MBps) [2024-10-16T20:27:31.263Z] Copying: 545/1024 [MB] (13 MBps) [2024-10-16T20:27:32.207Z] Copying: 561/1024 [MB] (15 MBps) [2024-10-16T20:27:33.151Z] Copying: 571/1024 [MB] (10 MBps) [2024-10-16T20:27:34.097Z] Copying: 594/1024 [MB] (22 MBps) [2024-10-16T20:27:35.041Z] Copying: 605/1024 [MB] (11 MBps) [2024-10-16T20:27:35.984Z] Copying: 619/1024 [MB] (13 MBps) [2024-10-16T20:27:37.370Z] Copying: 639/1024 [MB] (19 MBps) [2024-10-16T20:27:38.329Z] Copying: 670/1024 [MB] (30 MBps) [2024-10-16T20:27:39.292Z] Copying: 698/1024 [MB] (27 MBps) [2024-10-16T20:27:40.236Z] Copying: 729/1024 [MB] (31 MBps) [2024-10-16T20:27:41.180Z] Copying: 758/1024 [MB] (29 MBps) [2024-10-16T20:27:42.125Z] Copying: 774/1024 [MB] (15 MBps) [2024-10-16T20:27:43.068Z] Copying: 790/1024 [MB] (15 MBps) [2024-10-16T20:27:44.012Z] Copying: 813/1024 [MB] (23 MBps) [2024-10-16T20:27:45.400Z] Copying: 840/1024 [MB] (26 MBps) [2024-10-16T20:27:46.342Z] Copying: 854/1024 [MB] (14 MBps) [2024-10-16T20:27:47.285Z] Copying: 885/1024 [MB] (31 MBps) [2024-10-16T20:27:48.229Z] Copying: 915/1024 [MB] (29 MBps) [2024-10-16T20:27:49.174Z] Copying: 931/1024 [MB] (16 MBps) [2024-10-16T20:27:50.118Z] Copying: 953/1024 [MB] (21 MBps) [2024-10-16T20:27:51.062Z] Copying: 972/1024 [MB] (19 MBps) [2024-10-16T20:27:52.004Z] Copying: 1003/1024 [MB] (30 MBps) [2024-10-16T20:27:52.948Z] Copying: 1023/1024 [MB] (19 MBps) [2024-10-16T20:27:52.948Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-10-16 20:27:52.695900] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.019 [2024-10-16 20:27:52.696202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:38.019 [2024-10-16 20:27:52.696233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:38.019 [2024-10-16 20:27:52.696243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.019 [2024-10-16 20:27:52.698267] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:38.019 [2024-10-16 20:27:52.702554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.019 [2024-10-16 20:27:52.702608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:38.019 [2024-10-16 20:27:52.702622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.234 ms 00:20:38.019 [2024-10-16 20:27:52.702631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.019 [2024-10-16 20:27:52.716386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.019 [2024-10-16 20:27:52.716433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:38.019 [2024-10-16 20:27:52.716457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.946 ms 00:20:38.019 [2024-10-16 20:27:52.716466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.019 [2024-10-16 20:27:52.738766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.019 [2024-10-16 20:27:52.738954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:38.019 [2024-10-16 20:27:52.738976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.282 ms 00:20:38.019 [2024-10-16 20:27:52.738985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.019 [2024-10-16 20:27:52.745131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.019 [2024-10-16 20:27:52.745308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:38.019 [2024-10-16 20:27:52.745329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.109 ms 00:20:38.019 [2024-10-16 20:27:52.745346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.019 [2024-10-16 20:27:52.772583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.019 [2024-10-16 20:27:52.772785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:38.019 [2024-10-16 20:27:52.772808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.178 ms 00:20:38.019 [2024-10-16 20:27:52.772817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.019 [2024-10-16 20:27:52.789836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.019 [2024-10-16 20:27:52.789886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:38.019 [2024-10-16 20:27:52.789900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.952 ms 00:20:38.019 [2024-10-16 20:27:52.789909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.281 [2024-10-16 20:27:53.042373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.281 [2024-10-16 20:27:53.042424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:38.281 [2024-10-16 20:27:53.042437] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 252.409 ms 00:20:38.281 [2024-10-16 20:27:53.042446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.281 [2024-10-16 20:27:53.069408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.281 [2024-10-16 20:27:53.069456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:38.281 [2024-10-16 20:27:53.069469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.937 ms 00:20:38.281 [2024-10-16 20:27:53.069477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.281 [2024-10-16 20:27:53.095383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.281 [2024-10-16 20:27:53.095605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:38.281 [2024-10-16 20:27:53.095641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.858 ms 00:20:38.281 [2024-10-16 20:27:53.095649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.281 [2024-10-16 20:27:53.129649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.281 [2024-10-16 20:27:53.129707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:38.281 [2024-10-16 20:27:53.129722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.736 ms 00:20:38.281 [2024-10-16 20:27:53.129730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.281 [2024-10-16 20:27:53.155599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.281 [2024-10-16 20:27:53.155650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:38.281 [2024-10-16 20:27:53.155663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.756 ms 00:20:38.282 [2024-10-16 20:27:53.155671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.282 [2024-10-16 20:27:53.155720] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:38.282 [2024-10-16 20:27:53.155738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 90880 / 261120 wr_cnt: 1 state: open 00:20:38.282 [2024-10-16 20:27:53.155750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155825] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.155997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 
[2024-10-16 20:27:53.156028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:38.282 [2024-10-16 20:27:53.156259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:38.282 [2024-10-16 20:27:53.156515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:38.283 [2024-10-16 20:27:53.156609] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:38.283 [2024-10-16 20:27:53.156619] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d899f785-ee45-4182-91c8-1d95f73f7d3d 00:20:38.283 [2024-10-16 20:27:53.156627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 90880 00:20:38.283 [2024-10-16 20:27:53.156642] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 91840 00:20:38.283 [2024-10-16 20:27:53.156649] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 90880 00:20:38.283 [2024-10-16 20:27:53.156663] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0106 00:20:38.283 [2024-10-16 20:27:53.156671] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:38.283 [2024-10-16 20:27:53.156680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:38.283 [2024-10-16 20:27:53.156688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:38.283 [2024-10-16 20:27:53.156701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:38.283 [2024-10-16 20:27:53.156708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:38.283 [2024-10-16 20:27:53.156715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:20:38.283 [2024-10-16 20:27:53.156724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:38.283 [2024-10-16 20:27:53.156732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:20:38.283 [2024-10-16 20:27:53.156745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.283 [2024-10-16 20:27:53.170794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.283 [2024-10-16 20:27:53.170849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:38.283 [2024-10-16 20:27:53.170860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.998 ms 00:20:38.283 [2024-10-16 20:27:53.170868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.283 [2024-10-16 20:27:53.171123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.283 [2024-10-16 20:27:53.171136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:38.283 [2024-10-16 20:27:53.171145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:20:38.283 [2024-10-16 20:27:53.171153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.543 [2024-10-16 20:27:53.210337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.543 [2024-10-16 20:27:53.210388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.543 [2024-10-16 20:27:53.210400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.543 [2024-10-16 20:27:53.210410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.543 [2024-10-16 20:27:53.210479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.543 [2024-10-16 20:27:53.210488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.543 [2024-10-16 20:27:53.210497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.543 [2024-10-16 20:27:53.210506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.543 [2024-10-16 20:27:53.210583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.543 [2024-10-16 20:27:53.210603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.543 [2024-10-16 20:27:53.210611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.543 [2024-10-16 20:27:53.210619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.543 [2024-10-16 20:27:53.210637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.543 [2024-10-16 20:27:53.210647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.543 [2024-10-16 20:27:53.210655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.543 [2024-10-16 20:27:53.210665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.543 [2024-10-16 20:27:53.292085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.544 [2024-10-16 20:27:53.292141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.544 [2024-10-16 20:27:53.292153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.544 [2024-10-16 20:27:53.292162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.544 [2024-10-16 
20:27:53.325298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.544 [2024-10-16 20:27:53.325573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.544 [2024-10-16 20:27:53.325593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.544 [2024-10-16 20:27:53.325604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.544 [2024-10-16 20:27:53.325677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.544 [2024-10-16 20:27:53.325687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.544 [2024-10-16 20:27:53.325705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.544 [2024-10-16 20:27:53.325713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.544 [2024-10-16 20:27:53.325755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.544 [2024-10-16 20:27:53.325766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.544 [2024-10-16 20:27:53.325775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.544 [2024-10-16 20:27:53.325783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.544 [2024-10-16 20:27:53.325891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.544 [2024-10-16 20:27:53.325903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.544 [2024-10-16 20:27:53.325912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.544 [2024-10-16 20:27:53.325923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.544 [2024-10-16 20:27:53.325955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.544 [2024-10-16 20:27:53.325967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:38.544 [2024-10-16 20:27:53.325976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.544 [2024-10-16 20:27:53.325983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.544 [2024-10-16 20:27:53.326025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.544 [2024-10-16 20:27:53.326035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.544 [2024-10-16 20:27:53.326082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.544 [2024-10-16 20:27:53.326095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.544 [2024-10-16 20:27:53.326147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.544 [2024-10-16 20:27:53.326160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.544 [2024-10-16 20:27:53.326171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.544 [2024-10-16 20:27:53.326179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.544 [2024-10-16 20:27:53.326311] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 630.952 ms, result 0 00:20:40.465 00:20:40.465 00:20:40.465 20:27:55 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:40.465 [2024-10-16 20:27:55.119640] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:40.465 [2024-10-16 20:27:55.119781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74918 ] 00:20:40.465 [2024-10-16 20:27:55.273944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.749 [2024-10-16 20:27:55.498676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.010 [2024-10-16 20:27:55.788133] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.010 [2024-10-16 20:27:55.788213] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.273 [2024-10-16 20:27:55.944113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.944175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:41.273 [2024-10-16 20:27:55.944191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:41.273 [2024-10-16 20:27:55.944203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.944257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.944268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.273 [2024-10-16 20:27:55.944277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:41.273 [2024-10-16 20:27:55.944286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.944307] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:41.273 [2024-10-16 20:27:55.945378] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:41.273 [2024-10-16 20:27:55.945440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.945451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.273 [2024-10-16 20:27:55.945463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:20:41.273 [2024-10-16 20:27:55.945471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.947370] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:41.273 [2024-10-16 20:27:55.962263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.962316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:41.273 [2024-10-16 20:27:55.962332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.895 ms 00:20:41.273 [2024-10-16 20:27:55.962340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.962426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.962436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:41.273 [2024-10-16 20:27:55.962446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:41.273 [2024-10-16 
20:27:55.962453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.971449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.971496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.273 [2024-10-16 20:27:55.971508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.911 ms 00:20:41.273 [2024-10-16 20:27:55.971517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.971620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.971630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.273 [2024-10-16 20:27:55.971639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:41.273 [2024-10-16 20:27:55.971650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.971697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.971707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:41.273 [2024-10-16 20:27:55.971716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:41.273 [2024-10-16 20:27:55.971724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.971756] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:41.273 [2024-10-16 20:27:55.976177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.976220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.273 [2024-10-16 20:27:55.976231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.435 ms 00:20:41.273 [2024-10-16 20:27:55.976239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.976279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.976287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:41.273 [2024-10-16 20:27:55.976300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:41.273 [2024-10-16 20:27:55.976308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.976364] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:41.273 [2024-10-16 20:27:55.976387] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:41.273 [2024-10-16 20:27:55.976424] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:41.273 [2024-10-16 20:27:55.976442] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:41.273 [2024-10-16 20:27:55.976519] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:41.273 [2024-10-16 20:27:55.976536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:41.273 [2024-10-16 20:27:55.976547] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:41.273 
[2024-10-16 20:27:55.976558] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:41.273 [2024-10-16 20:27:55.976568] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:41.273 [2024-10-16 20:27:55.976577] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:41.273 [2024-10-16 20:27:55.976586] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:41.273 [2024-10-16 20:27:55.976593] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:41.273 [2024-10-16 20:27:55.976600] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:41.273 [2024-10-16 20:27:55.976608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.976616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:41.273 [2024-10-16 20:27:55.976624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:20:41.273 [2024-10-16 20:27:55.976636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.976696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.273 [2024-10-16 20:27:55.976706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:41.273 [2024-10-16 20:27:55.976715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:41.273 [2024-10-16 20:27:55.976724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.273 [2024-10-16 20:27:55.976796] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:41.273 [2024-10-16 20:27:55.976808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:41.273 [2024-10-16 20:27:55.976817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.273 [2024-10-16 20:27:55.976825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.273 [2024-10-16 20:27:55.976836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:41.274 [2024-10-16 20:27:55.976843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:41.274 [2024-10-16 20:27:55.976852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:41.274 [2024-10-16 20:27:55.976863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:41.274 [2024-10-16 20:27:55.976871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:41.274 [2024-10-16 20:27:55.976878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.274 [2024-10-16 20:27:55.976885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:41.274 [2024-10-16 20:27:55.976892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:41.274 [2024-10-16 20:27:55.976898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.274 [2024-10-16 20:27:55.976907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:41.274 [2024-10-16 20:27:55.976916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:41.274 [2024-10-16 20:27:55.976924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.274 [2024-10-16 20:27:55.976940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
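
Every management step in the startup trace above is logged by mngt/ftl_mngt.c as a fixed group of entries — Action (or Rollback on teardown), then name, duration, and status — which makes step timings easy to pull out of a saved console log. A minimal sketch, assuming the output is captured one entry per line in a file named console.log (the filename and the one-entry-per-line layout are assumptions, not something this run produced):

    # Pair each 407:'name:' entry with the 409:'duration:' entry that
    # follows it, then list the slowest steps first.
    awk -F'duration: ' '
        /407:trace_step/ { sub(/.*name: /, ""); name = $0 }
        /409:trace_step/ { split($2, a, " "); print a[1], name }
    ' console.log | sort -rn | head

Against the entries above, 'Load super block' (14.895 ms) and 'Initialize memory pools' (8.911 ms) would already sort to the top of this startup.
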
00:20:41.274 [2024-10-16 20:27:55.976946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:41.274 [2024-10-16 20:27:55.976953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.274 [2024-10-16 20:27:55.976959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:41.274 [2024-10-16 20:27:55.976966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:41.274 [2024-10-16 20:27:55.976972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:41.274 [2024-10-16 20:27:55.976981] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:41.274 [2024-10-16 20:27:55.976987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:41.274 [2024-10-16 20:27:55.976993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:41.274 [2024-10-16 20:27:55.976999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:41.274 [2024-10-16 20:27:55.977005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:41.274 [2024-10-16 20:27:55.977012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:41.274 [2024-10-16 20:27:55.977019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:41.274 [2024-10-16 20:27:55.977027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:41.274 [2024-10-16 20:27:55.977032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:41.274 [2024-10-16 20:27:55.977058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:41.274 [2024-10-16 20:27:55.977077] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:41.274 [2024-10-16 20:27:55.977084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:41.274 [2024-10-16 20:27:55.977091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:41.274 [2024-10-16 20:27:55.977098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:41.274 [2024-10-16 20:27:55.977106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.274 [2024-10-16 20:27:55.977113] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:41.274 [2024-10-16 20:27:55.977121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:41.274 [2024-10-16 20:27:55.977127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.274 [2024-10-16 20:27:55.977139] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:41.274 [2024-10-16 20:27:55.977147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:41.274 [2024-10-16 20:27:55.977156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.274 [2024-10-16 20:27:55.977164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.274 [2024-10-16 20:27:55.977172] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:41.274 [2024-10-16 20:27:55.977180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:41.274 [2024-10-16 20:27:55.977187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:41.274 [2024-10-16 20:27:55.977194] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:41.274 [2024-10-16 20:27:55.977200] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.25 MiB 00:20:41.274 [2024-10-16 20:27:55.977207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:41.274 [2024-10-16 20:27:55.977216] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:41.274 [2024-10-16 20:27:55.977227] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.274 [2024-10-16 20:27:55.977235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:41.274 [2024-10-16 20:27:55.977243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:41.274 [2024-10-16 20:27:55.977251] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:41.274 [2024-10-16 20:27:55.977259] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:41.274 [2024-10-16 20:27:55.977267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:41.274 [2024-10-16 20:27:55.977274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:41.274 [2024-10-16 20:27:55.977281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:41.274 [2024-10-16 20:27:55.977288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:41.274 [2024-10-16 20:27:55.977295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:41.274 [2024-10-16 20:27:55.977302] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:41.274 [2024-10-16 20:27:55.977309] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:41.274 [2024-10-16 20:27:55.977319] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:41.274 [2024-10-16 20:27:55.977327] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:41.274 [2024-10-16 20:27:55.977334] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:41.274 [2024-10-16 20:27:55.977343] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.274 [2024-10-16 20:27:55.977351] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:41.274 [2024-10-16 20:27:55.977360] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:41.274 [2024-10-16 20:27:55.977368] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 
ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:41.274 [2024-10-16 20:27:55.977375] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:41.274 [2024-10-16 20:27:55.977383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.274 [2024-10-16 20:27:55.977390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:41.274 [2024-10-16 20:27:55.977398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:20:41.274 [2024-10-16 20:27:55.977411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.274 [2024-10-16 20:27:55.996470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.274 [2024-10-16 20:27:55.996520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.274 [2024-10-16 20:27:55.996533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.016 ms 00:20:41.274 [2024-10-16 20:27:55.996549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.274 [2024-10-16 20:27:55.996645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.274 [2024-10-16 20:27:55.996654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:41.274 [2024-10-16 20:27:55.996663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:41.274 [2024-10-16 20:27:55.996671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.274 [2024-10-16 20:27:56.045054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.274 [2024-10-16 20:27:56.045141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:41.274 [2024-10-16 20:27:56.045155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.326 ms 00:20:41.274 [2024-10-16 20:27:56.045164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.274 [2024-10-16 20:27:56.045219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.274 [2024-10-16 20:27:56.045230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:41.274 [2024-10-16 20:27:56.045240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:41.274 [2024-10-16 20:27:56.045248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.274 [2024-10-16 20:27:56.045807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.274 [2024-10-16 20:27:56.045845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:41.274 [2024-10-16 20:27:56.045864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:20:41.274 [2024-10-16 20:27:56.045873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.274 [2024-10-16 20:27:56.046008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.274 [2024-10-16 20:27:56.046018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:41.274 [2024-10-16 20:27:56.046027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:41.274 [2024-10-16 20:27:56.046036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.274 [2024-10-16 20:27:56.063188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.274 [2024-10-16 20:27:56.063236] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:41.274 [2024-10-16 20:27:56.063249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.104 ms 00:20:41.274 [2024-10-16 20:27:56.063258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.275 [2024-10-16 20:27:56.078151] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:41.275 [2024-10-16 20:27:56.078203] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:41.275 [2024-10-16 20:27:56.078216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.275 [2024-10-16 20:27:56.078225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:41.275 [2024-10-16 20:27:56.078236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.841 ms 00:20:41.275 [2024-10-16 20:27:56.078244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.275 [2024-10-16 20:27:56.105203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.275 [2024-10-16 20:27:56.105270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:41.275 [2024-10-16 20:27:56.105285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.901 ms 00:20:41.275 [2024-10-16 20:27:56.105294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.275 [2024-10-16 20:27:56.118853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.275 [2024-10-16 20:27:56.118904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:41.275 [2024-10-16 20:27:56.118916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.489 ms 00:20:41.275 [2024-10-16 20:27:56.118923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.275 [2024-10-16 20:27:56.132449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.275 [2024-10-16 20:27:56.132660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:41.275 [2024-10-16 20:27:56.132696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.473 ms 00:20:41.275 [2024-10-16 20:27:56.132704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.275 [2024-10-16 20:27:56.133136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.275 [2024-10-16 20:27:56.133157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:41.275 [2024-10-16 20:27:56.133169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:20:41.275 [2024-10-16 20:27:56.133178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.202803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.536 [2024-10-16 20:27:56.202885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:41.536 [2024-10-16 20:27:56.202900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.604 ms 00:20:41.536 [2024-10-16 20:27:56.202909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.215587] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:41.536 [2024-10-16 20:27:56.219002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:41.536 [2024-10-16 20:27:56.219260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:41.536 [2024-10-16 20:27:56.219292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.024 ms 00:20:41.536 [2024-10-16 20:27:56.219300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.219387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.536 [2024-10-16 20:27:56.219399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:41.536 [2024-10-16 20:27:56.219410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:41.536 [2024-10-16 20:27:56.219419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.220851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.536 [2024-10-16 20:27:56.220909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:41.536 [2024-10-16 20:27:56.220920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:20:41.536 [2024-10-16 20:27:56.220935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.222363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.536 [2024-10-16 20:27:56.222543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:41.536 [2024-10-16 20:27:56.222563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:20:41.536 [2024-10-16 20:27:56.222572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.222613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.536 [2024-10-16 20:27:56.222630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:41.536 [2024-10-16 20:27:56.222639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:41.536 [2024-10-16 20:27:56.222646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.222683] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:41.536 [2024-10-16 20:27:56.222696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.536 [2024-10-16 20:27:56.222705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:41.536 [2024-10-16 20:27:56.222714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:41.536 [2024-10-16 20:27:56.222722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.249976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.536 [2024-10-16 20:27:56.250190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:41.536 [2024-10-16 20:27:56.250214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.232 ms 00:20:41.536 [2024-10-16 20:27:56.250231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.250310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.536 [2024-10-16 20:27:56.250320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:41.536 [2024-10-16 20:27:56.250329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 
00:20:41.536 [2024-10-16 20:27:56.250338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.536 [2024-10-16 20:27:56.255809] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.656 ms, result 0 00:20:42.924  [2024-10-16T20:27:58.797Z] Copying: 9988/1048576 [kB] (9988 kBps) [2024-10-16T20:27:59.741Z] Copying: 27/1024 [MB] (17 MBps) [2024-10-16T20:28:00.685Z] Copying: 45/1024 [MB] (18 MBps) [2024-10-16T20:28:01.627Z] Copying: 69/1024 [MB] (24 MBps) [2024-10-16T20:28:02.569Z] Copying: 91/1024 [MB] (22 MBps) [2024-10-16T20:28:03.512Z] Copying: 107/1024 [MB] (15 MBps) [2024-10-16T20:28:04.455Z] Copying: 129/1024 [MB] (22 MBps) [2024-10-16T20:28:05.843Z] Copying: 149/1024 [MB] (19 MBps) [2024-10-16T20:28:06.788Z] Copying: 175/1024 [MB] (25 MBps) [2024-10-16T20:28:07.732Z] Copying: 200/1024 [MB] (24 MBps) [2024-10-16T20:28:08.678Z] Copying: 220/1024 [MB] (20 MBps) [2024-10-16T20:28:09.623Z] Copying: 235/1024 [MB] (14 MBps) [2024-10-16T20:28:10.569Z] Copying: 253/1024 [MB] (17 MBps) [2024-10-16T20:28:11.513Z] Copying: 274/1024 [MB] (21 MBps) [2024-10-16T20:28:12.490Z] Copying: 291/1024 [MB] (16 MBps) [2024-10-16T20:28:13.875Z] Copying: 311/1024 [MB] (19 MBps) [2024-10-16T20:28:14.450Z] Copying: 333/1024 [MB] (21 MBps) [2024-10-16T20:28:15.835Z] Copying: 349/1024 [MB] (16 MBps) [2024-10-16T20:28:16.778Z] Copying: 372/1024 [MB] (22 MBps) [2024-10-16T20:28:17.729Z] Copying: 387/1024 [MB] (15 MBps) [2024-10-16T20:28:18.674Z] Copying: 404/1024 [MB] (16 MBps) [2024-10-16T20:28:19.617Z] Copying: 419/1024 [MB] (15 MBps) [2024-10-16T20:28:20.560Z] Copying: 430/1024 [MB] (11 MBps) [2024-10-16T20:28:21.505Z] Copying: 443/1024 [MB] (12 MBps) [2024-10-16T20:28:22.450Z] Copying: 453/1024 [MB] (10 MBps) [2024-10-16T20:28:23.838Z] Copying: 464/1024 [MB] (10 MBps) [2024-10-16T20:28:24.782Z] Copying: 485/1024 [MB] (20 MBps) [2024-10-16T20:28:25.728Z] Copying: 499/1024 [MB] (14 MBps) [2024-10-16T20:28:26.672Z] Copying: 514/1024 [MB] (14 MBps) [2024-10-16T20:28:27.616Z] Copying: 528/1024 [MB] (14 MBps) [2024-10-16T20:28:28.560Z] Copying: 550/1024 [MB] (22 MBps) [2024-10-16T20:28:29.505Z] Copying: 571/1024 [MB] (21 MBps) [2024-10-16T20:28:30.482Z] Copying: 592/1024 [MB] (20 MBps) [2024-10-16T20:28:31.868Z] Copying: 604/1024 [MB] (12 MBps) [2024-10-16T20:28:32.812Z] Copying: 625/1024 [MB] (20 MBps) [2024-10-16T20:28:33.757Z] Copying: 635/1024 [MB] (10 MBps) [2024-10-16T20:28:34.701Z] Copying: 650/1024 [MB] (14 MBps) [2024-10-16T20:28:35.647Z] Copying: 670/1024 [MB] (20 MBps) [2024-10-16T20:28:36.591Z] Copying: 683/1024 [MB] (12 MBps) [2024-10-16T20:28:37.536Z] Copying: 697/1024 [MB] (14 MBps) [2024-10-16T20:28:38.478Z] Copying: 723/1024 [MB] (25 MBps) [2024-10-16T20:28:39.528Z] Copying: 739/1024 [MB] (16 MBps) [2024-10-16T20:28:40.469Z] Copying: 757/1024 [MB] (17 MBps) [2024-10-16T20:28:41.857Z] Copying: 772/1024 [MB] (15 MBps) [2024-10-16T20:28:42.802Z] Copying: 793/1024 [MB] (20 MBps) [2024-10-16T20:28:43.747Z] Copying: 809/1024 [MB] (16 MBps) [2024-10-16T20:28:44.692Z] Copying: 832/1024 [MB] (22 MBps) [2024-10-16T20:28:45.636Z] Copying: 852/1024 [MB] (20 MBps) [2024-10-16T20:28:46.581Z] Copying: 874/1024 [MB] (21 MBps) [2024-10-16T20:28:47.525Z] Copying: 891/1024 [MB] (16 MBps) [2024-10-16T20:28:48.469Z] Copying: 909/1024 [MB] (18 MBps) [2024-10-16T20:28:49.855Z] Copying: 925/1024 [MB] (16 MBps) [2024-10-16T20:28:50.800Z] Copying: 937/1024 [MB] (11 MBps) [2024-10-16T20:28:51.745Z] Copying: 947/1024 [MB] (10 MBps) 
[2024-10-16T20:28:52.687Z] Copying: 961/1024 [MB] (13 MBps) [2024-10-16T20:28:53.631Z] Copying: 974/1024 [MB] (13 MBps) [2024-10-16T20:28:54.574Z] Copying: 994/1024 [MB] (20 MBps) [2024-10-16T20:28:55.517Z] Copying: 1010/1024 [MB] (15 MBps) [2024-10-16T20:28:55.778Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-16 20:28:55.659342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-10-16 20:28:55.659677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:40.849 [2024-10-16 20:28:55.659705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:40.849 [2024-10-16 20:28:55.659716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-10-16 20:28:55.659751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:40.849 [2024-10-16 20:28:55.663561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-10-16 20:28:55.663719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:40.849 [2024-10-16 20:28:55.663740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.791 ms 00:21:40.849 [2024-10-16 20:28:55.663749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-10-16 20:28:55.664067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-10-16 20:28:55.664086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:40.849 [2024-10-16 20:28:55.664097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:21:40.849 [2024-10-16 20:28:55.664106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-10-16 20:28:55.672423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-10-16 20:28:55.672545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:40.849 [2024-10-16 20:28:55.672604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.298 ms 00:21:40.849 [2024-10-16 20:28:55.672627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-10-16 20:28:55.679367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-10-16 20:28:55.679493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:40.849 [2024-10-16 20:28:55.679560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.694 ms 00:21:40.849 [2024-10-16 20:28:55.679583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-10-16 20:28:55.706100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-10-16 20:28:55.706263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:40.849 [2024-10-16 20:28:55.706327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.451 ms 00:21:40.849 [2024-10-16 20:28:55.706351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-10-16 20:28:55.722810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-10-16 20:28:55.722969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:40.849 [2024-10-16 20:28:55.723033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.381 ms 00:21:40.849 [2024-10-16 20:28:55.723085] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.111 [2024-10-16 20:28:56.009495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.111 [2024-10-16 20:28:56.009661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:41.111 [2024-10-16 20:28:56.009728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 286.353 ms 00:21:41.111 [2024-10-16 20:28:56.009752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.111 [2024-10-16 20:28:56.035498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.111 [2024-10-16 20:28:56.035649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:41.111 [2024-10-16 20:28:56.035713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.702 ms 00:21:41.111 [2024-10-16 20:28:56.035736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.374 [2024-10-16 20:28:56.060729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.374 [2024-10-16 20:28:56.060877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:41.374 [2024-10-16 20:28:56.060950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.947 ms 00:21:41.374 [2024-10-16 20:28:56.060987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.374 [2024-10-16 20:28:56.085393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.374 [2024-10-16 20:28:56.085544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:41.374 [2024-10-16 20:28:56.085604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.360 ms 00:21:41.374 [2024-10-16 20:28:56.085626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.374 [2024-10-16 20:28:56.110291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.374 [2024-10-16 20:28:56.110437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:41.374 [2024-10-16 20:28:56.110497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.578 ms 00:21:41.374 [2024-10-16 20:28:56.110519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.374 [2024-10-16 20:28:56.110563] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:41.374 [2024-10-16 20:28:56.110593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133376 / 261120 wr_cnt: 1 state: open 00:21:41.374 [2024-10-16 20:28:56.110625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.110655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.110683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.110898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.110951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.110980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 
wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.111523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-10-16 20:28:56.112732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112886] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.112998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113585] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-10-16 20:28:56.113735] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:41.375 [2024-10-16 20:28:56.113744] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d899f785-ee45-4182-91c8-1d95f73f7d3d 00:21:41.375 [2024-10-16 20:28:56.113752] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133376 00:21:41.375 [2024-10-16 20:28:56.113760] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 43456 00:21:41.375 [2024-10-16 20:28:56.113774] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 42496 00:21:41.375 [2024-10-16 20:28:56.113783] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0226 00:21:41.375 [2024-10-16 20:28:56.113790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:41.375 [2024-10-16 20:28:56.113798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:41.375 [2024-10-16 20:28:56.113805] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:21:41.375 [2024-10-16 20:28:56.113812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:41.375 [2024-10-16 20:28:56.113826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:41.375 [2024-10-16 20:28:56.113837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.375 [2024-10-16 20:28:56.113849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:41.375 [2024-10-16 20:28:56.113858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:21:41.375 [2024-10-16 20:28:56.113866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-10-16 20:28:56.127125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.375 [2024-10-16 20:28:56.127162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:41.375 [2024-10-16 20:28:56.127173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.206 ms 00:21:41.375 [2024-10-16 20:28:56.127181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-10-16 20:28:56.127409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.375 [2024-10-16 20:28:56.127418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:41.375 [2024-10-16 20:28:56.127426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:21:41.375 [2024-10-16 20:28:56.127434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-10-16 20:28:56.166463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-10-16 20:28:56.166496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.375 [2024-10-16 20:28:56.166507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-10-16 20:28:56.166515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-10-16 20:28:56.166582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-10-16 20:28:56.166591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.375 [2024-10-16 20:28:56.166599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-10-16 20:28:56.166607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-10-16 20:28:56.166689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-10-16 20:28:56.166700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.375 [2024-10-16 20:28:56.166708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-10-16 20:28:56.166716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-10-16 20:28:56.166731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-10-16 20:28:56.166740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.375 [2024-10-16 20:28:56.166748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.376 [2024-10-16 20:28:56.166756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.376 [2024-10-16 20:28:56.246512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.376 [2024-10-16 
20:28:56.246744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.376 [2024-10-16 20:28:56.246767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.376 [2024-10-16 20:28:56.246776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.376 [2024-10-16 20:28:56.277818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.376 [2024-10-16 20:28:56.277976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.376 [2024-10-16 20:28:56.277995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.376 [2024-10-16 20:28:56.278005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.376 [2024-10-16 20:28:56.278095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.376 [2024-10-16 20:28:56.278113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.376 [2024-10-16 20:28:56.278122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.376 [2024-10-16 20:28:56.278131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.376 [2024-10-16 20:28:56.278172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.376 [2024-10-16 20:28:56.278183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.376 [2024-10-16 20:28:56.278193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.376 [2024-10-16 20:28:56.278201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.376 [2024-10-16 20:28:56.278306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.376 [2024-10-16 20:28:56.278320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.376 [2024-10-16 20:28:56.278332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.376 [2024-10-16 20:28:56.278341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.376 [2024-10-16 20:28:56.278373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.376 [2024-10-16 20:28:56.278385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:41.376 [2024-10-16 20:28:56.278395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.376 [2024-10-16 20:28:56.278404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.376 [2024-10-16 20:28:56.278446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.376 [2024-10-16 20:28:56.278457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.376 [2024-10-16 20:28:56.278469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.376 [2024-10-16 20:28:56.278478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.376 [2024-10-16 20:28:56.278526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.376 [2024-10-16 20:28:56.278539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.376 [2024-10-16 20:28:56.278547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.376 [2024-10-16 20:28:56.278556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.376 [2024-10-16 20:28:56.278690] mngt/ftl_mngt.c: 
434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 619.319 ms, result 0 00:21:42.318 00:21:42.318 00:21:42.318 20:28:57 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:44.866 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:44.866 20:28:59 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:44.866 20:28:59 -- ftl/restore.sh@85 -- # restore_kill 00:21:44.866 20:28:59 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:44.866 20:28:59 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:44.866 20:28:59 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:44.866 Process with pid 72896 is not found 00:21:44.866 20:28:59 -- ftl/restore.sh@32 -- # killprocess 72896 00:21:44.866 20:28:59 -- common/autotest_common.sh@926 -- # '[' -z 72896 ']' 00:21:44.866 20:28:59 -- common/autotest_common.sh@930 -- # kill -0 72896 00:21:44.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (72896) - No such process 00:21:44.866 20:28:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 72896 is not found' 00:21:44.866 20:28:59 -- ftl/restore.sh@33 -- # remove_shm 00:21:44.866 Remove shared memory files 00:21:44.866 20:28:59 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:44.866 20:28:59 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:44.866 20:28:59 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:44.866 20:28:59 -- ftl/common.sh@207 -- # rm -f rm -f 00:21:44.866 20:28:59 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:44.866 20:28:59 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:44.866 ************************************ 00:21:44.866 END TEST ftl_restore 00:21:44.866 ************************************ 00:21:44.866 00:21:44.866 real 4m14.063s 00:21:44.866 user 4m1.304s 00:21:44.866 sys 0m12.461s 00:21:44.866 20:28:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.866 20:28:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.866 20:28:59 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:44.866 20:28:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:44.866 20:28:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:44.866 20:28:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.866 ************************************ 00:21:44.866 START TEST ftl_dirty_shutdown 00:21:44.866 ************************************ 00:21:44.866 20:28:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:44.866 * Looking for test storage... 00:21:44.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:44.866 20:28:59 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:44.866 20:28:59 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:44.866 20:28:59 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:44.866 20:28:59 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
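
The 'testfile: OK' line above is the pass criterion for the restore test: md5sum -c confirms that the data spdk_dd read back out of ftl0 is byte-identical to the reference file, and the 1024 MB copied is consistent with the requested --count=262144 blocks of 4096 bytes each (262144 × 4096 = 1024 MiB). The statistics dumped before shutdown check out the same way — WAF is total writes divided by user writes. A quick recomputation from the logged counters (plain arithmetic on the dump above, nothing re-measured):

    # WAF = total writes / user writes, per the ftl_debug.c dump above
    awk 'BEGIN { printf "WAF: %.4f\n", 43456 / 42496 }'   # prints WAF: 1.0226

The 960 writes beyond the 42496 user writes are the FTL's own metadata and relocation traffic.
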
00:21:44.866 20:28:59 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:44.866 20:28:59 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.866 20:28:59 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:44.866 20:28:59 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:44.866 20:28:59 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.866 20:28:59 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.866 20:28:59 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:44.866 20:28:59 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:44.866 20:28:59 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:44.866 20:28:59 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:44.866 20:28:59 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:44.866 20:28:59 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:44.866 20:28:59 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.866 20:28:59 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.866 20:28:59 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:44.866 20:28:59 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:44.866 20:28:59 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:44.866 20:28:59 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:44.866 20:28:59 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:44.866 20:28:59 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:44.866 20:28:59 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:44.866 20:28:59 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:44.866 20:28:59 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:44.866 20:28:59 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75638 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75638 00:21:44.866 20:28:59 -- common/autotest_common.sh@819 -- # '[' -z 75638 ']' 00:21:44.866 20:28:59 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
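Here spdk_tgt is launched in the background (pid 75638) and handed to waitforlisten, which blocks until the target answers on its RPC socket. A hedged sketch of that wait loop, using only details visible in the trace (the /var/tmp/spdk.sock default, max_retries=100, the status message, and the closing ((i == 0)) check); the actual probe in autotest_common.sh is assumed here to be an rpc_get_methods call:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}   # default socket seen in the trace
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
            # assumed probe: any successful RPC proves the socket is up
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                break
            fi
            sleep 0.5                                 # assumed back-off
        done
        ((i == 0)) && return 1                        # retries exhausted
        return 0
    }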
00:21:44.866 20:28:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.866 20:28:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:44.866 20:28:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.866 20:28:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:44.866 20:28:59 -- common/autotest_common.sh@10 -- # set +x 00:21:44.866 [2024-10-16 20:28:59.627205] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:44.866 [2024-10-16 20:28:59.627318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75638 ] 00:21:44.866 [2024-10-16 20:28:59.773371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.128 [2024-10-16 20:28:59.957890] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:45.128 [2024-10-16 20:28:59.958133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.518 20:29:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:46.518 20:29:01 -- common/autotest_common.sh@852 -- # return 0 00:21:46.518 20:29:01 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:21:46.518 20:29:01 -- ftl/common.sh@54 -- # local name=nvme0 00:21:46.518 20:29:01 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:21:46.518 20:29:01 -- ftl/common.sh@56 -- # local size=103424 00:21:46.518 20:29:01 -- ftl/common.sh@59 -- # local base_bdev 00:21:46.518 20:29:01 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:21:46.518 20:29:01 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:46.518 20:29:01 -- ftl/common.sh@62 -- # local base_size 00:21:46.518 20:29:01 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:46.518 20:29:01 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:21:46.518 20:29:01 -- common/autotest_common.sh@1358 -- # local bdev_info 00:21:46.518 20:29:01 -- common/autotest_common.sh@1359 -- # local bs 00:21:46.518 20:29:01 -- common/autotest_common.sh@1360 -- # local nb 00:21:46.518 20:29:01 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:46.779 20:29:01 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:21:46.779 { 00:21:46.779 "name": "nvme0n1", 00:21:46.779 "aliases": [ 00:21:46.779 "5a4b7942-1749-43d7-91e1-af0302db3291" 00:21:46.779 ], 00:21:46.779 "product_name": "NVMe disk", 00:21:46.779 "block_size": 4096, 00:21:46.779 "num_blocks": 1310720, 00:21:46.779 "uuid": "5a4b7942-1749-43d7-91e1-af0302db3291", 00:21:46.779 "assigned_rate_limits": { 00:21:46.779 "rw_ios_per_sec": 0, 00:21:46.779 "rw_mbytes_per_sec": 0, 00:21:46.779 "r_mbytes_per_sec": 0, 00:21:46.779 "w_mbytes_per_sec": 0 00:21:46.779 }, 00:21:46.779 "claimed": true, 00:21:46.779 "claim_type": "read_many_write_one", 00:21:46.779 "zoned": false, 00:21:46.779 "supported_io_types": { 00:21:46.779 "read": true, 00:21:46.779 "write": true, 00:21:46.779 "unmap": true, 00:21:46.779 "write_zeroes": true, 00:21:46.779 "flush": true, 00:21:46.779 "reset": true, 00:21:46.779 "compare": true, 
00:21:46.779 "compare_and_write": false, 00:21:46.779 "abort": true, 00:21:46.779 "nvme_admin": true, 00:21:46.779 "nvme_io": true 00:21:46.779 }, 00:21:46.779 "driver_specific": { 00:21:46.779 "nvme": [ 00:21:46.779 { 00:21:46.779 "pci_address": "0000:00:07.0", 00:21:46.779 "trid": { 00:21:46.779 "trtype": "PCIe", 00:21:46.779 "traddr": "0000:00:07.0" 00:21:46.779 }, 00:21:46.779 "ctrlr_data": { 00:21:46.779 "cntlid": 0, 00:21:46.779 "vendor_id": "0x1b36", 00:21:46.779 "model_number": "QEMU NVMe Ctrl", 00:21:46.779 "serial_number": "12341", 00:21:46.779 "firmware_revision": "8.0.0", 00:21:46.779 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:46.779 "oacs": { 00:21:46.779 "security": 0, 00:21:46.779 "format": 1, 00:21:46.779 "firmware": 0, 00:21:46.779 "ns_manage": 1 00:21:46.779 }, 00:21:46.779 "multi_ctrlr": false, 00:21:46.779 "ana_reporting": false 00:21:46.779 }, 00:21:46.779 "vs": { 00:21:46.779 "nvme_version": "1.4" 00:21:46.779 }, 00:21:46.779 "ns_data": { 00:21:46.779 "id": 1, 00:21:46.779 "can_share": false 00:21:46.779 } 00:21:46.779 } 00:21:46.779 ], 00:21:46.779 "mp_policy": "active_passive" 00:21:46.779 } 00:21:46.779 } 00:21:46.779 ]' 00:21:46.779 20:29:01 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:21:46.779 20:29:01 -- common/autotest_common.sh@1362 -- # bs=4096 00:21:46.779 20:29:01 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:21:46.779 20:29:01 -- common/autotest_common.sh@1363 -- # nb=1310720 00:21:46.779 20:29:01 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:21:46.779 20:29:01 -- common/autotest_common.sh@1367 -- # echo 5120 00:21:46.779 20:29:01 -- ftl/common.sh@63 -- # base_size=5120 00:21:46.779 20:29:01 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:46.779 20:29:01 -- ftl/common.sh@67 -- # clear_lvols 00:21:46.779 20:29:01 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:46.779 20:29:01 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:47.057 20:29:01 -- ftl/common.sh@28 -- # stores=54d14d06-c2a6-45f4-bde8-7aa430cd5691 00:21:47.057 20:29:01 -- ftl/common.sh@29 -- # for lvs in $stores 00:21:47.057 20:29:01 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54d14d06-c2a6-45f4-bde8-7aa430cd5691 00:21:47.057 20:29:01 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:47.324 20:29:02 -- ftl/common.sh@68 -- # lvs=05fe023f-2c5c-4d65-8bbb-02731182f4c9 00:21:47.324 20:29:02 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 05fe023f-2c5c-4d65-8bbb-02731182f4c9 00:21:47.585 20:29:02 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:47.585 20:29:02 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:21:47.585 20:29:02 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:47.585 20:29:02 -- ftl/common.sh@35 -- # local name=nvc0 00:21:47.585 20:29:02 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:21:47.585 20:29:02 -- ftl/common.sh@37 -- # local base_bdev=db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:47.585 20:29:02 -- ftl/common.sh@38 -- # local cache_size= 00:21:47.585 20:29:02 -- ftl/common.sh@41 -- # get_bdev_size db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:47.585 20:29:02 -- common/autotest_common.sh@1357 -- # local bdev_name=db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:47.585 20:29:02 -- 
common/autotest_common.sh@1358 -- # local bdev_info 00:21:47.585 20:29:02 -- common/autotest_common.sh@1359 -- # local bs 00:21:47.585 20:29:02 -- common/autotest_common.sh@1360 -- # local nb 00:21:47.585 20:29:02 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:47.846 20:29:02 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:21:47.846 { 00:21:47.846 "name": "db9386bd-8ce3-4fa8-81fe-7e2199303f44", 00:21:47.846 "aliases": [ 00:21:47.846 "lvs/nvme0n1p0" 00:21:47.846 ], 00:21:47.846 "product_name": "Logical Volume", 00:21:47.846 "block_size": 4096, 00:21:47.846 "num_blocks": 26476544, 00:21:47.846 "uuid": "db9386bd-8ce3-4fa8-81fe-7e2199303f44", 00:21:47.846 "assigned_rate_limits": { 00:21:47.846 "rw_ios_per_sec": 0, 00:21:47.846 "rw_mbytes_per_sec": 0, 00:21:47.846 "r_mbytes_per_sec": 0, 00:21:47.846 "w_mbytes_per_sec": 0 00:21:47.846 }, 00:21:47.846 "claimed": false, 00:21:47.846 "zoned": false, 00:21:47.846 "supported_io_types": { 00:21:47.846 "read": true, 00:21:47.846 "write": true, 00:21:47.846 "unmap": true, 00:21:47.846 "write_zeroes": true, 00:21:47.846 "flush": false, 00:21:47.846 "reset": true, 00:21:47.846 "compare": false, 00:21:47.846 "compare_and_write": false, 00:21:47.846 "abort": false, 00:21:47.846 "nvme_admin": false, 00:21:47.846 "nvme_io": false 00:21:47.846 }, 00:21:47.846 "driver_specific": { 00:21:47.846 "lvol": { 00:21:47.846 "lvol_store_uuid": "05fe023f-2c5c-4d65-8bbb-02731182f4c9", 00:21:47.846 "base_bdev": "nvme0n1", 00:21:47.846 "thin_provision": true, 00:21:47.846 "snapshot": false, 00:21:47.846 "clone": false, 00:21:47.846 "esnap_clone": false 00:21:47.846 } 00:21:47.846 } 00:21:47.846 } 00:21:47.846 ]' 00:21:47.846 20:29:02 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:21:47.846 20:29:02 -- common/autotest_common.sh@1362 -- # bs=4096 00:21:47.846 20:29:02 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:21:47.846 20:29:02 -- common/autotest_common.sh@1363 -- # nb=26476544 00:21:47.846 20:29:02 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:21:47.846 20:29:02 -- common/autotest_common.sh@1367 -- # echo 103424 00:21:47.846 20:29:02 -- ftl/common.sh@41 -- # local base_size=5171 00:21:47.846 20:29:02 -- ftl/common.sh@44 -- # local nvc_bdev 00:21:47.846 20:29:02 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:21:48.107 20:29:02 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:48.107 20:29:02 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:48.107 20:29:02 -- ftl/common.sh@48 -- # get_bdev_size db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:48.107 20:29:02 -- common/autotest_common.sh@1357 -- # local bdev_name=db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:48.107 20:29:02 -- common/autotest_common.sh@1358 -- # local bdev_info 00:21:48.107 20:29:02 -- common/autotest_common.sh@1359 -- # local bs 00:21:48.107 20:29:02 -- common/autotest_common.sh@1360 -- # local nb 00:21:48.107 20:29:02 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:48.368 20:29:03 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:21:48.368 { 00:21:48.368 "name": "db9386bd-8ce3-4fa8-81fe-7e2199303f44", 00:21:48.368 "aliases": [ 00:21:48.368 "lvs/nvme0n1p0" 00:21:48.368 ], 00:21:48.368 "product_name": "Logical Volume", 00:21:48.368 "block_size": 4096, 00:21:48.368 "num_blocks": 26476544, 
00:21:48.368 "uuid": "db9386bd-8ce3-4fa8-81fe-7e2199303f44", 00:21:48.368 "assigned_rate_limits": { 00:21:48.368 "rw_ios_per_sec": 0, 00:21:48.368 "rw_mbytes_per_sec": 0, 00:21:48.368 "r_mbytes_per_sec": 0, 00:21:48.368 "w_mbytes_per_sec": 0 00:21:48.368 }, 00:21:48.368 "claimed": false, 00:21:48.368 "zoned": false, 00:21:48.368 "supported_io_types": { 00:21:48.368 "read": true, 00:21:48.368 "write": true, 00:21:48.368 "unmap": true, 00:21:48.368 "write_zeroes": true, 00:21:48.368 "flush": false, 00:21:48.368 "reset": true, 00:21:48.368 "compare": false, 00:21:48.368 "compare_and_write": false, 00:21:48.368 "abort": false, 00:21:48.368 "nvme_admin": false, 00:21:48.368 "nvme_io": false 00:21:48.368 }, 00:21:48.368 "driver_specific": { 00:21:48.368 "lvol": { 00:21:48.368 "lvol_store_uuid": "05fe023f-2c5c-4d65-8bbb-02731182f4c9", 00:21:48.368 "base_bdev": "nvme0n1", 00:21:48.368 "thin_provision": true, 00:21:48.368 "snapshot": false, 00:21:48.368 "clone": false, 00:21:48.368 "esnap_clone": false 00:21:48.368 } 00:21:48.368 } 00:21:48.368 } 00:21:48.368 ]' 00:21:48.368 20:29:03 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:21:48.368 20:29:03 -- common/autotest_common.sh@1362 -- # bs=4096 00:21:48.368 20:29:03 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:21:48.368 20:29:03 -- common/autotest_common.sh@1363 -- # nb=26476544 00:21:48.368 20:29:03 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:21:48.368 20:29:03 -- common/autotest_common.sh@1367 -- # echo 103424 00:21:48.368 20:29:03 -- ftl/common.sh@48 -- # cache_size=5171 00:21:48.368 20:29:03 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:48.629 20:29:03 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:48.629 20:29:03 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:48.629 20:29:03 -- common/autotest_common.sh@1357 -- # local bdev_name=db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:48.629 20:29:03 -- common/autotest_common.sh@1358 -- # local bdev_info 00:21:48.629 20:29:03 -- common/autotest_common.sh@1359 -- # local bs 00:21:48.629 20:29:03 -- common/autotest_common.sh@1360 -- # local nb 00:21:48.629 20:29:03 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db9386bd-8ce3-4fa8-81fe-7e2199303f44 00:21:48.629 20:29:03 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:21:48.629 { 00:21:48.629 "name": "db9386bd-8ce3-4fa8-81fe-7e2199303f44", 00:21:48.629 "aliases": [ 00:21:48.629 "lvs/nvme0n1p0" 00:21:48.629 ], 00:21:48.629 "product_name": "Logical Volume", 00:21:48.629 "block_size": 4096, 00:21:48.629 "num_blocks": 26476544, 00:21:48.629 "uuid": "db9386bd-8ce3-4fa8-81fe-7e2199303f44", 00:21:48.629 "assigned_rate_limits": { 00:21:48.629 "rw_ios_per_sec": 0, 00:21:48.629 "rw_mbytes_per_sec": 0, 00:21:48.629 "r_mbytes_per_sec": 0, 00:21:48.629 "w_mbytes_per_sec": 0 00:21:48.629 }, 00:21:48.629 "claimed": false, 00:21:48.629 "zoned": false, 00:21:48.629 "supported_io_types": { 00:21:48.629 "read": true, 00:21:48.629 "write": true, 00:21:48.629 "unmap": true, 00:21:48.629 "write_zeroes": true, 00:21:48.629 "flush": false, 00:21:48.629 "reset": true, 00:21:48.629 "compare": false, 00:21:48.629 "compare_and_write": false, 00:21:48.629 "abort": false, 00:21:48.629 "nvme_admin": false, 00:21:48.629 "nvme_io": false 00:21:48.629 }, 00:21:48.629 "driver_specific": { 00:21:48.629 "lvol": { 00:21:48.629 "lvol_store_uuid": 
"05fe023f-2c5c-4d65-8bbb-02731182f4c9", 00:21:48.629 "base_bdev": "nvme0n1", 00:21:48.629 "thin_provision": true, 00:21:48.629 "snapshot": false, 00:21:48.629 "clone": false, 00:21:48.629 "esnap_clone": false 00:21:48.629 } 00:21:48.629 } 00:21:48.629 } 00:21:48.629 ]' 00:21:48.629 20:29:03 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:21:48.629 20:29:03 -- common/autotest_common.sh@1362 -- # bs=4096 00:21:48.629 20:29:03 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:21:48.629 20:29:03 -- common/autotest_common.sh@1363 -- # nb=26476544 00:21:48.629 20:29:03 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:21:48.629 20:29:03 -- common/autotest_common.sh@1367 -- # echo 103424 00:21:48.629 20:29:03 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:48.629 20:29:03 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d db9386bd-8ce3-4fa8-81fe-7e2199303f44 --l2p_dram_limit 10' 00:21:48.629 20:29:03 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:48.629 20:29:03 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:21:48.629 20:29:03 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:48.629 20:29:03 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d db9386bd-8ce3-4fa8-81fe-7e2199303f44 --l2p_dram_limit 10 -c nvc0n1p0 00:21:48.892 [2024-10-16 20:29:03.683357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.892 [2024-10-16 20:29:03.683396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:48.892 [2024-10-16 20:29:03.683409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:48.892 [2024-10-16 20:29:03.683417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.892 [2024-10-16 20:29:03.683458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.892 [2024-10-16 20:29:03.683465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.892 [2024-10-16 20:29:03.683473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:48.892 [2024-10-16 20:29:03.683479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.892 [2024-10-16 20:29:03.683495] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:48.892 [2024-10-16 20:29:03.684093] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:48.892 [2024-10-16 20:29:03.684110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.892 [2024-10-16 20:29:03.684117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.892 [2024-10-16 20:29:03.684125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:21:48.892 [2024-10-16 20:29:03.684131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.892 [2024-10-16 20:29:03.684156] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 86454fe5-dc7e-4923-91b2-585fdda5c058 00:21:48.892 [2024-10-16 20:29:03.685154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.892 [2024-10-16 20:29:03.685178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:48.892 [2024-10-16 20:29:03.685186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.022 ms 00:21:48.892 [2024-10-16 20:29:03.685194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.892 [2024-10-16 20:29:03.689934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.892 [2024-10-16 20:29:03.689961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.892 [2024-10-16 20:29:03.689970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.704 ms 00:21:48.892 [2024-10-16 20:29:03.689977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.892 [2024-10-16 20:29:03.690086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.892 [2024-10-16 20:29:03.690096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.892 [2024-10-16 20:29:03.690103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:48.892 [2024-10-16 20:29:03.690113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.892 [2024-10-16 20:29:03.690150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.892 [2024-10-16 20:29:03.690159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:48.892 [2024-10-16 20:29:03.690165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:48.892 [2024-10-16 20:29:03.690172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.892 [2024-10-16 20:29:03.690190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:48.892 [2024-10-16 20:29:03.693120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.892 [2024-10-16 20:29:03.693143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.892 [2024-10-16 20:29:03.693154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:21:48.892 [2024-10-16 20:29:03.693160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.892 [2024-10-16 20:29:03.693189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.892 [2024-10-16 20:29:03.693195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:48.892 [2024-10-16 20:29:03.693202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:48.892 [2024-10-16 20:29:03.693208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.892 [2024-10-16 20:29:03.693229] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:48.892 [2024-10-16 20:29:03.693313] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:48.892 [2024-10-16 20:29:03.693326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:48.892 [2024-10-16 20:29:03.693334] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:48.892 [2024-10-16 20:29:03.693344] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693352] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693360] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:48.893 [2024-10-16 
20:29:03.693371] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:48.893 [2024-10-16 20:29:03.693379] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:48.893 [2024-10-16 20:29:03.693384] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:48.893 [2024-10-16 20:29:03.693391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.893 [2024-10-16 20:29:03.693397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:48.893 [2024-10-16 20:29:03.693405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:21:48.893 [2024-10-16 20:29:03.693410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.893 [2024-10-16 20:29:03.693459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.893 [2024-10-16 20:29:03.693465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:48.893 [2024-10-16 20:29:03.693474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:48.893 [2024-10-16 20:29:03.693480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.893 [2024-10-16 20:29:03.693536] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:48.893 [2024-10-16 20:29:03.693543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:48.893 [2024-10-16 20:29:03.693551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:48.893 [2024-10-16 20:29:03.693571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:48.893 [2024-10-16 20:29:03.693589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.893 [2024-10-16 20:29:03.693603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:48.893 [2024-10-16 20:29:03.693609] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:48.893 [2024-10-16 20:29:03.693615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.893 [2024-10-16 20:29:03.693620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:48.893 [2024-10-16 20:29:03.693628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:48.893 [2024-10-16 20:29:03.693633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:48.893 [2024-10-16 20:29:03.693646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:48.893 [2024-10-16 20:29:03.693652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:48.893 [2024-10-16 20:29:03.693664] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:48.893 [2024-10-16 20:29:03.693669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:48.893 [2024-10-16 20:29:03.693680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:48.893 [2024-10-16 20:29:03.693697] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693709] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:48.893 [2024-10-16 20:29:03.693714] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:48.893 [2024-10-16 20:29:03.693733] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:48.893 [2024-10-16 20:29:03.693748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.893 [2024-10-16 20:29:03.693759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:48.893 [2024-10-16 20:29:03.693765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:48.893 [2024-10-16 20:29:03.693771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.893 [2024-10-16 20:29:03.693779] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:48.893 [2024-10-16 20:29:03.693785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:48.893 [2024-10-16 20:29:03.693792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.893 [2024-10-16 20:29:03.693807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:48.893 [2024-10-16 20:29:03.693812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:48.893 [2024-10-16 20:29:03.693819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:48.893 [2024-10-16 20:29:03.693824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:48.893 [2024-10-16 20:29:03.693832] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:48.893 [2024-10-16 20:29:03.693837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:48.893 [2024-10-16 20:29:03.693844] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:48.893 [2024-10-16 20:29:03.693851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.893 [2024-10-16 20:29:03.693859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:48.893 [2024-10-16 20:29:03.693866] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:48.893 [2024-10-16 20:29:03.693872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:48.893 [2024-10-16 20:29:03.693878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:48.893 [2024-10-16 20:29:03.693884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:48.893 [2024-10-16 20:29:03.693890] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:48.893 [2024-10-16 20:29:03.693896] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:48.893 [2024-10-16 20:29:03.693902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:48.893 [2024-10-16 20:29:03.693908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:48.893 [2024-10-16 20:29:03.693914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:48.893 [2024-10-16 20:29:03.693920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:48.893 [2024-10-16 20:29:03.693926] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:48.893 [2024-10-16 20:29:03.693935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:48.893 [2024-10-16 20:29:03.693940] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:48.893 [2024-10-16 20:29:03.693948] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.893 [2024-10-16 20:29:03.693955] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:48.893 [2024-10-16 20:29:03.693962] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:48.893 [2024-10-16 20:29:03.693968] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:48.893 [2024-10-16 20:29:03.693974] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:48.893 [2024-10-16 20:29:03.693980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.893 [2024-10-16 20:29:03.693987] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:48.893 [2024-10-16 20:29:03.693992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:21:48.893 [2024-10-16 20:29:03.694000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.893 [2024-10-16 20:29:03.706024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.893 [2024-10-16 20:29:03.706069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.893 [2024-10-16 20:29:03.706077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.994 ms 00:21:48.893 [2024-10-16 20:29:03.706084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.893 [2024-10-16 20:29:03.706163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.893 [2024-10-16 20:29:03.706175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:48.893 [2024-10-16 20:29:03.706182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:48.893 [2024-10-16 20:29:03.706188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.893 [2024-10-16 20:29:03.729953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.893 [2024-10-16 20:29:03.729981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.893 [2024-10-16 20:29:03.729990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.739 ms 00:21:48.893 [2024-10-16 20:29:03.730000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.893 [2024-10-16 20:29:03.730022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.893 [2024-10-16 20:29:03.730030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.893 [2024-10-16 20:29:03.730038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:48.893 [2024-10-16 20:29:03.730062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.893 [2024-10-16 20:29:03.730353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.893 [2024-10-16 20:29:03.730376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.893 [2024-10-16 20:29:03.730383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:21:48.894 [2024-10-16 20:29:03.730391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.894 [2024-10-16 20:29:03.730475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.894 [2024-10-16 20:29:03.730486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.894 [2024-10-16 20:29:03.730492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:48.894 [2024-10-16 20:29:03.730500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.894 [2024-10-16 20:29:03.742463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.894 [2024-10-16 20:29:03.742489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.894 [2024-10-16 20:29:03.742496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.947 ms 00:21:48.894 [2024-10-16 20:29:03.742505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.894 [2024-10-16 20:29:03.751430] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 
(of 10) MiB 00:21:48.894 [2024-10-16 20:29:03.753709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.894 [2024-10-16 20:29:03.753734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:48.894 [2024-10-16 20:29:03.753744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.150 ms 00:21:48.894 [2024-10-16 20:29:03.753751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.155 [2024-10-16 20:29:03.823864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.155 [2024-10-16 20:29:03.823893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:49.155 [2024-10-16 20:29:03.823904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.090 ms 00:21:49.155 [2024-10-16 20:29:03.823911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.155 [2024-10-16 20:29:03.823946] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:21:49.155 [2024-10-16 20:29:03.823955] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:21:53.360 [2024-10-16 20:29:07.704476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.704534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:53.360 [2024-10-16 20:29:07.704553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3880.514 ms 00:21:53.360 [2024-10-16 20:29:07.704562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.704749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.704764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:53.360 [2024-10-16 20:29:07.704775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:21:53.360 [2024-10-16 20:29:07.704784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.730116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.730289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:53.360 [2024-10-16 20:29:07.730316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.281 ms 00:21:53.360 [2024-10-16 20:29:07.730325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.755221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.755267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:53.360 [2024-10-16 20:29:07.755286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.857 ms 00:21:53.360 [2024-10-16 20:29:07.755294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.755610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.755622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:53.360 [2024-10-16 20:29:07.755634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:21:53.360 [2024-10-16 20:29:07.755645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.830329] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.830378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:53.360 [2024-10-16 20:29:07.830395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.639 ms 00:21:53.360 [2024-10-16 20:29:07.830404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.858428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.858475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:53.360 [2024-10-16 20:29:07.858491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.967 ms 00:21:53.360 [2024-10-16 20:29:07.858500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.859989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.860039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:53.360 [2024-10-16 20:29:07.860075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 00:21:53.360 [2024-10-16 20:29:07.860083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.886496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.886543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:53.360 [2024-10-16 20:29:07.886559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.351 ms 00:21:53.360 [2024-10-16 20:29:07.886567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.886626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.886636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:53.360 [2024-10-16 20:29:07.886649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:53.360 [2024-10-16 20:29:07.886660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.886765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.360 [2024-10-16 20:29:07.886778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:53.360 [2024-10-16 20:29:07.886790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:53.360 [2024-10-16 20:29:07.886799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.360 [2024-10-16 20:29:07.887931] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4204.067 ms, result 0 00:21:53.360 { 00:21:53.360 "name": "ftl0", 00:21:53.360 "uuid": "86454fe5-dc7e-4923-91b2-585fdda5c058" 00:21:53.360 } 00:21:53.360 20:29:07 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:53.360 20:29:07 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:53.360 20:29:08 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:53.360 20:29:08 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:53.360 20:29:08 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:53.360 /dev/nbd0 00:21:53.360 20:29:08 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:53.360 20:29:08 -- 
common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:53.360 20:29:08 -- common/autotest_common.sh@857 -- # local i 00:21:53.360 20:29:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:53.360 20:29:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:53.360 20:29:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:53.360 20:29:08 -- common/autotest_common.sh@861 -- # break 00:21:53.360 20:29:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:53.360 20:29:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:53.360 20:29:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:53.360 1+0 records in 00:21:53.360 1+0 records out 00:21:53.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521857 s, 7.8 MB/s 00:21:53.360 20:29:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:53.621 20:29:08 -- common/autotest_common.sh@874 -- # size=4096 00:21:53.621 20:29:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:53.621 20:29:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:53.621 20:29:08 -- common/autotest_common.sh@877 -- # return 0 00:21:53.621 20:29:08 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:53.621 [2024-10-16 20:29:08.350527] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:53.621 [2024-10-16 20:29:08.350635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75793 ] 00:21:53.621 [2024-10-16 20:29:08.501649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.885 [2024-10-16 20:29:08.696090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.319  [2024-10-16T20:29:11.191Z] Copying: 195/1024 [MB] (195 MBps) [2024-10-16T20:29:12.133Z] Copying: 431/1024 [MB] (235 MBps) [2024-10-16T20:29:13.074Z] Copying: 691/1024 [MB] (260 MBps) [2024-10-16T20:29:13.335Z] Copying: 948/1024 [MB] (257 MBps) [2024-10-16T20:29:13.907Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:21:58.978 00:21:58.978 20:29:13 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:00.892 20:29:15 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:01.153 [2024-10-16 20:29:15.848146] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
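Before the spdk_dd workload starts writing, waitfornbd (traced just above) proves /dev/nbd0 is usable in two stages: poll /proc/partitions until the device registers, then read one 4 KiB block with O_DIRECT and check that a non-empty file came back. A sketch assembled from that xtrace; the retry cap of 20 matches the trace, while the sleep interval and the local nbdtest path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do          # stage 1: device node registered?
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                            # assumed back-off
        done
        for ((i = 1; i <= 20; i++)); do          # stage 2: device actually readable?
            if dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct 2> /dev/null; then
                size=$(stat -c %s nbdtest)
                rm -f nbdtest
                [ "$size" != 0 ] && return 0     # got a real block back
            fi
            sleep 0.1
        done
        return 1
    }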
00:22:01.153 [2024-10-16 20:29:15.848257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75874 ] 00:22:01.153 [2024-10-16 20:29:15.998952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.414 [2024-10-16 20:29:16.136909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.799  [2024-10-16T20:29:18.671Z] Copying: 20/1024 [MB] (20 MBps) [2024-10-16T20:29:19.615Z] Copying: 36/1024 [MB] (16 MBps) [2024-10-16T20:29:20.559Z] Copying: 55/1024 [MB] (18 MBps) [2024-10-16T20:29:21.502Z] Copying: 69/1024 [MB] (14 MBps) [2024-10-16T20:29:22.446Z] Copying: 86/1024 [MB] (16 MBps) [2024-10-16T20:29:23.390Z] Copying: 100/1024 [MB] (14 MBps) [2024-10-16T20:29:24.334Z] Copying: 116/1024 [MB] (15 MBps) [2024-10-16T20:29:25.721Z] Copying: 132/1024 [MB] (16 MBps) [2024-10-16T20:29:26.323Z] Copying: 152/1024 [MB] (19 MBps) [2024-10-16T20:29:27.710Z] Copying: 172/1024 [MB] (19 MBps) [2024-10-16T20:29:28.654Z] Copying: 190/1024 [MB] (17 MBps) [2024-10-16T20:29:29.599Z] Copying: 205/1024 [MB] (15 MBps) [2024-10-16T20:29:30.543Z] Copying: 220/1024 [MB] (14 MBps) [2024-10-16T20:29:31.487Z] Copying: 236/1024 [MB] (15 MBps) [2024-10-16T20:29:32.432Z] Copying: 252/1024 [MB] (16 MBps) [2024-10-16T20:29:33.376Z] Copying: 265/1024 [MB] (13 MBps) [2024-10-16T20:29:34.320Z] Copying: 278/1024 [MB] (12 MBps) [2024-10-16T20:29:35.706Z] Copying: 290/1024 [MB] (12 MBps) [2024-10-16T20:29:36.650Z] Copying: 309/1024 [MB] (19 MBps) [2024-10-16T20:29:37.593Z] Copying: 325/1024 [MB] (15 MBps) [2024-10-16T20:29:38.536Z] Copying: 338/1024 [MB] (12 MBps) [2024-10-16T20:29:39.480Z] Copying: 355/1024 [MB] (17 MBps) [2024-10-16T20:29:40.422Z] Copying: 370/1024 [MB] (14 MBps) [2024-10-16T20:29:41.367Z] Copying: 384/1024 [MB] (14 MBps) [2024-10-16T20:29:42.311Z] Copying: 399/1024 [MB] (15 MBps) [2024-10-16T20:29:43.699Z] Copying: 419/1024 [MB] (19 MBps) [2024-10-16T20:29:44.645Z] Copying: 436/1024 [MB] (16 MBps) [2024-10-16T20:29:45.590Z] Copying: 452/1024 [MB] (15 MBps) [2024-10-16T20:29:46.534Z] Copying: 465/1024 [MB] (13 MBps) [2024-10-16T20:29:47.493Z] Copying: 478/1024 [MB] (13 MBps) [2024-10-16T20:29:48.443Z] Copying: 492/1024 [MB] (13 MBps) [2024-10-16T20:29:49.386Z] Copying: 506/1024 [MB] (14 MBps) [2024-10-16T20:29:50.328Z] Copying: 522/1024 [MB] (15 MBps) [2024-10-16T20:29:51.714Z] Copying: 538/1024 [MB] (16 MBps) [2024-10-16T20:29:52.658Z] Copying: 558/1024 [MB] (19 MBps) [2024-10-16T20:29:53.602Z] Copying: 571/1024 [MB] (12 MBps) [2024-10-16T20:29:54.546Z] Copying: 591/1024 [MB] (19 MBps) [2024-10-16T20:29:55.490Z] Copying: 606/1024 [MB] (15 MBps) [2024-10-16T20:29:56.433Z] Copying: 620/1024 [MB] (13 MBps) [2024-10-16T20:29:57.376Z] Copying: 636/1024 [MB] (16 MBps) [2024-10-16T20:29:58.316Z] Copying: 654/1024 [MB] (17 MBps) [2024-10-16T20:29:59.705Z] Copying: 669/1024 [MB] (14 MBps) [2024-10-16T20:30:00.645Z] Copying: 683/1024 [MB] (14 MBps) [2024-10-16T20:30:01.579Z] Copying: 698/1024 [MB] (15 MBps) [2024-10-16T20:30:02.510Z] Copying: 733/1024 [MB] (34 MBps) [2024-10-16T20:30:03.442Z] Copying: 768/1024 [MB] (34 MBps) [2024-10-16T20:30:04.376Z] Copying: 802/1024 [MB] (34 MBps) [2024-10-16T20:30:05.310Z] Copying: 835/1024 [MB] (33 MBps) [2024-10-16T20:30:06.681Z] Copying: 856/1024 [MB] (21 MBps) [2024-10-16T20:30:07.642Z] Copying: 885/1024 [MB] (28 MBps) [2024-10-16T20:30:08.576Z] Copying: 920/1024 
[MB] (35 MBps) [2024-10-16T20:30:09.507Z] Copying: 947/1024 [MB] (27 MBps) [2024-10-16T20:30:10.442Z] Copying: 969/1024 [MB] (22 MBps) [2024-10-16T20:30:11.377Z] Copying: 992/1024 [MB] (22 MBps) [2024-10-16T20:30:11.635Z] Copying: 1017/1024 [MB] (25 MBps) [2024-10-16T20:30:12.570Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:22:57.641 00:22:57.641 20:30:12 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:57.641 20:30:12 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:57.641 20:30:12 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:57.903 [2024-10-16 20:30:12.627912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.903 [2024-10-16 20:30:12.627954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:57.903 [2024-10-16 20:30:12.627964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:57.903 [2024-10-16 20:30:12.627972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.903 [2024-10-16 20:30:12.627990] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:57.903 [2024-10-16 20:30:12.630059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.903 [2024-10-16 20:30:12.630182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:57.903 [2024-10-16 20:30:12.630199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:22:57.903 [2024-10-16 20:30:12.630205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.903 [2024-10-16 20:30:12.631982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.903 [2024-10-16 20:30:12.632013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:57.903 [2024-10-16 20:30:12.632022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:22:57.903 [2024-10-16 20:30:12.632027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.903 [2024-10-16 20:30:12.645775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.903 [2024-10-16 20:30:12.645878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:57.903 [2024-10-16 20:30:12.645894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.730 ms 00:22:57.903 [2024-10-16 20:30:12.645900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.903 [2024-10-16 20:30:12.650516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.903 [2024-10-16 20:30:12.650539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:57.903 [2024-10-16 20:30:12.650551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.586 ms 00:22:57.903 [2024-10-16 20:30:12.650557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.903 [2024-10-16 20:30:12.669316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.903 [2024-10-16 20:30:12.669343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:57.903 [2024-10-16 20:30:12.669352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.700 ms 00:22:57.903 [2024-10-16 20:30:12.669358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.903 [2024-10-16 20:30:12.682068] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.903 [2024-10-16 20:30:12.682094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:57.903 [2024-10-16 20:30:12.682105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.678 ms 00:22:57.903 [2024-10-16 20:30:12.682112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.903 [2024-10-16 20:30:12.682219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.903 [2024-10-16 20:30:12.682227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:57.903 [2024-10-16 20:30:12.682235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:57.904 [2024-10-16 20:30:12.682240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.904 [2024-10-16 20:30:12.700319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.904 [2024-10-16 20:30:12.700427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:57.904 [2024-10-16 20:30:12.700443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.062 ms 00:22:57.904 [2024-10-16 20:30:12.700448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.904 [2024-10-16 20:30:12.718181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.904 [2024-10-16 20:30:12.718206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:57.904 [2024-10-16 20:30:12.718215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.704 ms 00:22:57.904 [2024-10-16 20:30:12.718220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.904 [2024-10-16 20:30:12.735434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.904 [2024-10-16 20:30:12.735527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:57.904 [2024-10-16 20:30:12.735541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.183 ms 00:22:57.904 [2024-10-16 20:30:12.735546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.904 [2024-10-16 20:30:12.753113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.904 [2024-10-16 20:30:12.753137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:57.904 [2024-10-16 20:30:12.753146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.512 ms 00:22:57.904 [2024-10-16 20:30:12.753151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.904 [2024-10-16 20:30:12.753181] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:57.904 [2024-10-16 20:30:12.753191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753226] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 
20:30:12.753386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:22:57.904 [2024-10-16 20:30:12.753558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:57.904 [2024-10-16 20:30:12.753669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:57.905 [2024-10-16 20:30:12.753860] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:57.905 [2024-10-16 20:30:12.753869] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86454fe5-dc7e-4923-91b2-585fdda5c058 00:22:57.905 [2024-10-16 20:30:12.753875] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:57.905 [2024-10-16 20:30:12.753881] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:57.905 [2024-10-16 20:30:12.753887] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:57.905 [2024-10-16 20:30:12.753894] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:57.905 [2024-10-16 20:30:12.753899] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:57.905 [2024-10-16 20:30:12.753906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:57.905 [2024-10-16 20:30:12.753911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:57.905 [2024-10-16 20:30:12.753918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:57.905 [2024-10-16 20:30:12.753922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:57.905 [2024-10-16 20:30:12.753931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.905 [2024-10-16 20:30:12.753936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:57.905 [2024-10-16 20:30:12.753943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:22:57.905 [2024-10-16 20:30:12.753949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.905 [2024-10-16 20:30:12.763894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.905 [2024-10-16 20:30:12.763981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:57.905 [2024-10-16 20:30:12.764024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.918 ms 00:22:57.905 [2024-10-16 20:30:12.764055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.905 [2024-10-16 20:30:12.764215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.905 [2024-10-16 20:30:12.764233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:57.905 [2024-10-16 20:30:12.764277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:22:57.905 [2024-10-16 20:30:12.764295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.905 [2024-10-16 20:30:12.799048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.905 [2024-10-16 20:30:12.799141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:57.905 [2024-10-16 20:30:12.799183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.905 [2024-10-16 20:30:12.799200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.905 [2024-10-16 20:30:12.799256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.905 [2024-10-16 20:30:12.799272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:57.905 [2024-10-16 20:30:12.799288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.905 [2024-10-16 20:30:12.799304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.905 [2024-10-16 20:30:12.799364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.905 [2024-10-16 20:30:12.799443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:57.905 [2024-10-16 20:30:12.799460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.905 [2024-10-16 20:30:12.799474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.905 [2024-10-16 20:30:12.799497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.905 [2024-10-16 20:30:12.799513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:22:57.905 [2024-10-16 20:30:12.799529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.905 [2024-10-16 20:30:12.799579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.167 [2024-10-16 20:30:12.857446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.167 [2024-10-16 20:30:12.857569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.167 [2024-10-16 20:30:12.857613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.167 [2024-10-16 20:30:12.857631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.167 [2024-10-16 20:30:12.879956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.167 [2024-10-16 20:30:12.880059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.167 [2024-10-16 20:30:12.880102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.167 [2024-10-16 20:30:12.880121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.167 [2024-10-16 20:30:12.880179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.167 [2024-10-16 20:30:12.880197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:58.167 [2024-10-16 20:30:12.880214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.167 [2024-10-16 20:30:12.880228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.167 [2024-10-16 20:30:12.880271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.167 [2024-10-16 20:30:12.880288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:58.167 [2024-10-16 20:30:12.880355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.167 [2024-10-16 20:30:12.880373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.167 [2024-10-16 20:30:12.880457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.167 [2024-10-16 20:30:12.880508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:58.167 [2024-10-16 20:30:12.880528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.167 [2024-10-16 20:30:12.880542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.167 [2024-10-16 20:30:12.880621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.167 [2024-10-16 20:30:12.880641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:58.167 [2024-10-16 20:30:12.880657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.167 [2024-10-16 20:30:12.880672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.167 [2024-10-16 20:30:12.880742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.167 [2024-10-16 20:30:12.880761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:58.167 [2024-10-16 20:30:12.880786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.167 [2024-10-16 20:30:12.880801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.167 [2024-10-16 20:30:12.880846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.167 [2024-10-16 20:30:12.880952] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:58.167 [2024-10-16 20:30:12.880968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:58.167 [2024-10-16 20:30:12.880982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.167 [2024-10-16 20:30:12.881105] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 253.159 ms, result 0
00:22:58.167 true
00:22:58.167 20:30:12 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75638
00:22:58.167 20:30:12 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75638
00:22:58.167 20:30:12 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:22:58.167 [2024-10-16 20:30:12.948159] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:22:58.167 [2024-10-16 20:30:12.948244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76469 ]
00:22:58.428 [2024-10-16 20:30:13.087251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:58.428 [2024-10-16 20:30:13.227319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:59.814  [2024-10-16T20:30:15.687Z] Copying: 258/1024 [MB] (258 MBps) [2024-10-16T20:30:16.630Z] Copying: 518/1024 [MB] (260 MBps) [2024-10-16T20:30:17.572Z] Copying: 776/1024 [MB] (257 MBps) [2024-10-16T20:30:18.144Z] Copying: 1024/1024 [MB] (average 258 MBps)
00:23:03.215 
00:23:03.215 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75638 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:23:03.215 20:30:18 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:03.215 [2024-10-16 20:30:18.075738] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
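The @83-@88 sequence above is the core of the dirty-shutdown scenario: the spdk_tgt that owns ftl0 is killed with SIGKILL so no FTL shutdown path runs, 1024 MiB of random data is staged into a plain file, and a second, standalone spdk_dd process then replays that file into the FTL bdev. A minimal sketch of the same flow, assuming an SPDK build tree in $SPDK_DIR and the ftl.json bdev config saved earlier by the test ($tgt_pid and the bare file names are illustrative):

  kill -9 "$tgt_pid"    # simulated crash: the target never unloads ftl0
  "$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
  "$SPDK_DIR/build/bin/spdk_dd" --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 \
      --json=ftl.json   # --ob targets a bdev recreated from the saved JSON config

Because spdk_dd is itself an SPDK application, the --json config is what brings ftl0 back up inside pid 76526; the blobstore recovery notices that follow are printed by that second process, not by the killed target.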
00:23:03.215 [2024-10-16 20:30:18.075867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76526 ]
00:23:03.477 [2024-10-16 20:30:18.221935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:03.738 [2024-10-16 20:30:18.423754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:03.999 [2024-10-16 20:30:18.712188] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:03.999 [2024-10-16 20:30:18.712267] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:03.999 [2024-10-16 20:30:18.774942] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore
00:23:03.999 [2024-10-16 20:30:18.775474] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:23:03.999 [2024-10-16 20:30:18.776024] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:23:04.259 [2024-10-16 20:30:19.177813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:04.259 [2024-10-16 20:30:19.177871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:23:04.259 [2024-10-16 20:30:19.177887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:23:04.259 [2024-10-16 20:30:19.177895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:04.259 [2024-10-16 20:30:19.177951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:04.259 [2024-10-16 20:30:19.177962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:04.259 [2024-10-16 20:30:19.177974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:23:04.259 [2024-10-16 20:30:19.177982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:04.259 [2024-10-16 20:30:19.178002] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:04.259 [2024-10-16 20:30:19.178820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:04.259 [2024-10-16 20:30:19.178849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:04.259 [2024-10-16 20:30:19.178858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:04.259 [2024-10-16 20:30:19.178867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms
00:23:04.259 [2024-10-16 20:30:19.178876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:04.259 [2024-10-16 20:30:19.180543] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:04.522 [2024-10-16 20:30:19.194905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:04.522 [2024-10-16 20:30:19.194954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:23:04.522 [2024-10-16 20:30:19.194967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.366 ms
00:23:04.522 [2024-10-16 20:30:19.194975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:04.522 [2024-10-16 20:30:19.195069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:04.522 [2024-10-16 20:30:19.195083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
[2024-10-16 20:30:19.195093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:04.522 [2024-10-16 20:30:19.195101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.522 [2024-10-16 20:30:19.203324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.522 [2024-10-16 20:30:19.203367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.522 [2024-10-16 20:30:19.203378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.147 ms 00:23:04.522 [2024-10-16 20:30:19.203385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.522 [2024-10-16 20:30:19.203484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.522 [2024-10-16 20:30:19.203494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.522 [2024-10-16 20:30:19.203503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:04.522 [2024-10-16 20:30:19.203511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.522 [2024-10-16 20:30:19.203553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.522 [2024-10-16 20:30:19.203562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:04.522 [2024-10-16 20:30:19.203571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:04.522 [2024-10-16 20:30:19.203578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.522 [2024-10-16 20:30:19.203607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:04.522 [2024-10-16 20:30:19.207861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.522 [2024-10-16 20:30:19.207901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.522 [2024-10-16 20:30:19.207912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.265 ms 00:23:04.522 [2024-10-16 20:30:19.207920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.522 [2024-10-16 20:30:19.207960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.522 [2024-10-16 20:30:19.207968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:04.522 [2024-10-16 20:30:19.207976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:04.522 [2024-10-16 20:30:19.207983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.522 [2024-10-16 20:30:19.208033] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:04.522 [2024-10-16 20:30:19.208074] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:04.522 [2024-10-16 20:30:19.208111] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:04.522 [2024-10-16 20:30:19.208129] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:04.522 [2024-10-16 20:30:19.208204] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:04.522 [2024-10-16 20:30:19.208215] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:04.522 [2024-10-16 20:30:19.208225] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:04.522 [2024-10-16 20:30:19.208235] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:04.522 [2024-10-16 20:30:19.208244] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:04.522 [2024-10-16 20:30:19.208253] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:04.522 [2024-10-16 20:30:19.208261] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:04.522 [2024-10-16 20:30:19.208268] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:04.522 [2024-10-16 20:30:19.208276] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:04.522 [2024-10-16 20:30:19.208288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.522 [2024-10-16 20:30:19.208296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:04.522 [2024-10-16 20:30:19.208304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:23:04.522 [2024-10-16 20:30:19.208311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.522 [2024-10-16 20:30:19.208371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.522 [2024-10-16 20:30:19.208380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:04.522 [2024-10-16 20:30:19.208387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:04.522 [2024-10-16 20:30:19.208394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.523 [2024-10-16 20:30:19.208465] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:04.523 [2024-10-16 20:30:19.208476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:04.523 [2024-10-16 20:30:19.208487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.523 [2024-10-16 20:30:19.208495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:04.523 [2024-10-16 20:30:19.208511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:04.523 [2024-10-16 20:30:19.208524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:04.523 [2024-10-16 20:30:19.208530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.523 [2024-10-16 20:30:19.208544] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:04.523 [2024-10-16 20:30:19.208550] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:04.523 [2024-10-16 20:30:19.208565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.523 [2024-10-16 20:30:19.208571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:04.523 [2024-10-16 20:30:19.208579] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:04.523 [2024-10-16 20:30:19.208587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:23:04.523 [2024-10-16 20:30:19.208593] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:04.523 [2024-10-16 20:30:19.208599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:04.523 [2024-10-16 20:30:19.208606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:04.523 [2024-10-16 20:30:19.208619] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:04.523 [2024-10-16 20:30:19.208625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:04.523 [2024-10-16 20:30:19.208632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:04.523 [2024-10-16 20:30:19.208638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:04.523 [2024-10-16 20:30:19.208651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:04.523 [2024-10-16 20:30:19.208657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:04.523 [2024-10-16 20:30:19.208670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:04.523 [2024-10-16 20:30:19.208676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:04.523 [2024-10-16 20:30:19.208689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:04.523 [2024-10-16 20:30:19.208696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:04.523 [2024-10-16 20:30:19.208708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:04.523 [2024-10-16 20:30:19.208714] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.523 [2024-10-16 20:30:19.208726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:04.523 [2024-10-16 20:30:19.208732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:04.523 [2024-10-16 20:30:19.208738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.523 [2024-10-16 20:30:19.208743] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:04.523 [2024-10-16 20:30:19.208752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:04.523 [2024-10-16 20:30:19.208784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.523 [2024-10-16 20:30:19.208792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.523 [2024-10-16 20:30:19.208800] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:04.523 [2024-10-16 20:30:19.208806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:04.523 [2024-10-16 20:30:19.208815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:04.523 [2024-10-16 20:30:19.208822] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:23:04.523 [2024-10-16 20:30:19.208829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:04.523 [2024-10-16 20:30:19.208837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:04.523 [2024-10-16 20:30:19.208845] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:04.523 [2024-10-16 20:30:19.208854] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.523 [2024-10-16 20:30:19.208863] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:04.523 [2024-10-16 20:30:19.208870] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:04.523 [2024-10-16 20:30:19.208877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:04.523 [2024-10-16 20:30:19.208884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:04.523 [2024-10-16 20:30:19.208891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:04.523 [2024-10-16 20:30:19.208898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:04.523 [2024-10-16 20:30:19.208905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:04.523 [2024-10-16 20:30:19.208913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:04.523 [2024-10-16 20:30:19.208920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:04.523 [2024-10-16 20:30:19.208926] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:04.523 [2024-10-16 20:30:19.208933] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:04.523 [2024-10-16 20:30:19.208941] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:04.523 [2024-10-16 20:30:19.208949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:04.523 [2024-10-16 20:30:19.208955] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:04.523 [2024-10-16 20:30:19.208965] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.523 [2024-10-16 20:30:19.208976] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:04.523 [2024-10-16 20:30:19.208983] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:04.523 
[2024-10-16 20:30:19.208991] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:04.523 [2024-10-16 20:30:19.208999] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:04.523 [2024-10-16 20:30:19.209006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.523 [2024-10-16 20:30:19.209014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:04.523 [2024-10-16 20:30:19.209022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:23:04.523 [2024-10-16 20:30:19.209031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.523 [2024-10-16 20:30:19.227463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.523 [2024-10-16 20:30:19.227512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:04.523 [2024-10-16 20:30:19.227524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.376 ms 00:23:04.523 [2024-10-16 20:30:19.227533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.523 [2024-10-16 20:30:19.227626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.523 [2024-10-16 20:30:19.227635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:04.523 [2024-10-16 20:30:19.227643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:04.523 [2024-10-16 20:30:19.227651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.523 [2024-10-16 20:30:19.275441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.523 [2024-10-16 20:30:19.275495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:04.523 [2024-10-16 20:30:19.275508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.736 ms 00:23:04.523 [2024-10-16 20:30:19.275517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.523 [2024-10-16 20:30:19.275567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.523 [2024-10-16 20:30:19.275577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:04.523 [2024-10-16 20:30:19.275586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:04.523 [2024-10-16 20:30:19.275598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.523 [2024-10-16 20:30:19.276187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.523 [2024-10-16 20:30:19.276211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:04.523 [2024-10-16 20:30:19.276221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:23:04.523 [2024-10-16 20:30:19.276230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.276362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.276373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:04.524 [2024-10-16 20:30:19.276382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:04.524 [2024-10-16 20:30:19.276389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.293251] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.293294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:04.524 [2024-10-16 20:30:19.293305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.836 ms 00:23:04.524 [2024-10-16 20:30:19.293315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.307933] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:04.524 [2024-10-16 20:30:19.307980] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:04.524 [2024-10-16 20:30:19.307993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.308002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:04.524 [2024-10-16 20:30:19.308012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.563 ms 00:23:04.524 [2024-10-16 20:30:19.308019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.333562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.333594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:04.524 [2024-10-16 20:30:19.333608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.471 ms 00:23:04.524 [2024-10-16 20:30:19.333615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.345584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.345707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:04.524 [2024-10-16 20:30:19.345723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.931 ms 00:23:04.524 [2024-10-16 20:30:19.345738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.357771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.357874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:04.524 [2024-10-16 20:30:19.357928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.004 ms 00:23:04.524 [2024-10-16 20:30:19.357950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.358609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.358720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:04.524 [2024-10-16 20:30:19.358773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:23:04.524 [2024-10-16 20:30:19.358796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.418590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.418727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:04.524 [2024-10-16 20:30:19.418784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.476 ms 00:23:04.524 [2024-10-16 20:30:19.418807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.438729] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 9 (of 10) MiB 00:23:04.524 [2024-10-16 20:30:19.441332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.441434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:04.524 [2024-10-16 20:30:19.441483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.940 ms 00:23:04.524 [2024-10-16 20:30:19.441506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.441611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.441640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:04.524 [2024-10-16 20:30:19.441661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:04.524 [2024-10-16 20:30:19.441723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.441810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.441861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:04.524 [2024-10-16 20:30:19.441885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:04.524 [2024-10-16 20:30:19.442297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.443547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.443647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:04.524 [2024-10-16 20:30:19.443696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:23:04.524 [2024-10-16 20:30:19.443724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.443793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.443818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:04.524 [2024-10-16 20:30:19.443841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:04.524 [2024-10-16 20:30:19.443860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.524 [2024-10-16 20:30:19.443901] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:04.524 [2024-10-16 20:30:19.443980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.524 [2024-10-16 20:30:19.444003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:04.524 [2024-10-16 20:30:19.444023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:04.524 [2024-10-16 20:30:19.444072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.785 [2024-10-16 20:30:19.468023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.785 [2024-10-16 20:30:19.468145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:04.785 [2024-10-16 20:30:19.468195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.914 ms 00:23:04.785 [2024-10-16 20:30:19.468218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.785 [2024-10-16 20:30:19.468619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.785 [2024-10-16 20:30:19.468733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
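Each management step in the startup trace above is reported as a name:/duration: pair, so the slow phases can be ranked straight from the console output. A rough sketch, assuming this console output has been saved to console.log (the awk filter is illustrative, not part of the test suite):

  awk '/trace_step.*name:/     { sub(/.*name: /, "");     step = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); print $0 "\t" step }' console.log |
      sort -rn | head    # longest steps first, e.g. "59.476 ms  Restore P2L checkpoints"

For this startup that puts Restore P2L checkpoints (59.476 ms) and Initialize NV cache (47.736 ms) at the top, consistent with the 'FTL startup' total of 291.621 ms reported just below.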
00:23:04.785 [2024-10-16 20:30:19.468795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:04.785 [2024-10-16 20:30:19.468821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.785 [2024-10-16 20:30:19.469827] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.621 ms, result 0 00:23:05.729  [2024-10-16T20:30:21.603Z] Copying: 18/1024 [MB] (18 MBps) [2024-10-16T20:30:22.546Z] Copying: 32/1024 [MB] (14 MBps) [2024-10-16T20:30:23.491Z] Copying: 44/1024 [MB] (11 MBps) [2024-10-16T20:30:24.878Z] Copying: 55/1024 [MB] (11 MBps) [2024-10-16T20:30:25.821Z] Copying: 78/1024 [MB] (22 MBps) [2024-10-16T20:30:26.808Z] Copying: 95/1024 [MB] (17 MBps) [2024-10-16T20:30:27.757Z] Copying: 116/1024 [MB] (20 MBps) [2024-10-16T20:30:28.700Z] Copying: 139/1024 [MB] (22 MBps) [2024-10-16T20:30:29.644Z] Copying: 153/1024 [MB] (14 MBps) [2024-10-16T20:30:30.588Z] Copying: 180/1024 [MB] (26 MBps) [2024-10-16T20:30:31.532Z] Copying: 202/1024 [MB] (21 MBps) [2024-10-16T20:30:32.919Z] Copying: 218/1024 [MB] (16 MBps) [2024-10-16T20:30:33.491Z] Copying: 246/1024 [MB] (27 MBps) [2024-10-16T20:30:34.878Z] Copying: 264/1024 [MB] (17 MBps) [2024-10-16T20:30:35.822Z] Copying: 274/1024 [MB] (10 MBps) [2024-10-16T20:30:36.765Z] Copying: 284/1024 [MB] (10 MBps) [2024-10-16T20:30:37.708Z] Copying: 307/1024 [MB] (22 MBps) [2024-10-16T20:30:38.650Z] Copying: 327/1024 [MB] (20 MBps) [2024-10-16T20:30:39.594Z] Copying: 343/1024 [MB] (16 MBps) [2024-10-16T20:30:40.536Z] Copying: 357/1024 [MB] (13 MBps) [2024-10-16T20:30:41.921Z] Copying: 375/1024 [MB] (18 MBps) [2024-10-16T20:30:42.495Z] Copying: 393/1024 [MB] (18 MBps) [2024-10-16T20:30:43.880Z] Copying: 411/1024 [MB] (17 MBps) [2024-10-16T20:30:44.824Z] Copying: 428/1024 [MB] (16 MBps) [2024-10-16T20:30:45.767Z] Copying: 444/1024 [MB] (16 MBps) [2024-10-16T20:30:46.711Z] Copying: 454/1024 [MB] (10 MBps) [2024-10-16T20:30:47.699Z] Copying: 465/1024 [MB] (10 MBps) [2024-10-16T20:30:48.643Z] Copying: 475/1024 [MB] (10 MBps) [2024-10-16T20:30:49.587Z] Copying: 495/1024 [MB] (19 MBps) [2024-10-16T20:30:50.531Z] Copying: 509/1024 [MB] (14 MBps) [2024-10-16T20:30:51.918Z] Copying: 527/1024 [MB] (17 MBps) [2024-10-16T20:30:52.492Z] Copying: 556/1024 [MB] (28 MBps) [2024-10-16T20:30:53.880Z] Copying: 579/1024 [MB] (23 MBps) [2024-10-16T20:30:54.825Z] Copying: 589/1024 [MB] (10 MBps) [2024-10-16T20:30:55.770Z] Copying: 600/1024 [MB] (10 MBps) [2024-10-16T20:30:56.714Z] Copying: 610/1024 [MB] (10 MBps) [2024-10-16T20:30:57.657Z] Copying: 621/1024 [MB] (11 MBps) [2024-10-16T20:30:58.598Z] Copying: 634/1024 [MB] (12 MBps) [2024-10-16T20:30:59.540Z] Copying: 648/1024 [MB] (14 MBps) [2024-10-16T20:31:00.925Z] Copying: 677/1024 [MB] (29 MBps) [2024-10-16T20:31:01.497Z] Copying: 701/1024 [MB] (23 MBps) [2024-10-16T20:31:02.884Z] Copying: 714/1024 [MB] (13 MBps) [2024-10-16T20:31:03.827Z] Copying: 735/1024 [MB] (21 MBps) [2024-10-16T20:31:04.769Z] Copying: 756/1024 [MB] (21 MBps) [2024-10-16T20:31:05.708Z] Copying: 785/1024 [MB] (28 MBps) [2024-10-16T20:31:06.651Z] Copying: 812/1024 [MB] (26 MBps) [2024-10-16T20:31:07.631Z] Copying: 822/1024 [MB] (10 MBps) [2024-10-16T20:31:08.576Z] Copying: 833/1024 [MB] (10 MBps) [2024-10-16T20:31:09.520Z] Copying: 850/1024 [MB] (17 MBps) [2024-10-16T20:31:10.909Z] Copying: 866/1024 [MB] (16 MBps) [2024-10-16T20:31:11.852Z] Copying: 877/1024 [MB] (10 MBps) [2024-10-16T20:31:12.795Z] Copying: 895/1024 [MB] (17 MBps) 
[2024-10-16T20:31:13.738Z] Copying: 905/1024 [MB] (10 MBps) [2024-10-16T20:31:14.682Z] Copying: 916/1024 [MB] (10 MBps) [2024-10-16T20:31:15.625Z] Copying: 926/1024 [MB] (10 MBps) [2024-10-16T20:31:16.570Z] Copying: 939/1024 [MB] (13 MBps) [2024-10-16T20:31:17.516Z] Copying: 959/1024 [MB] (20 MBps) [2024-10-16T20:31:18.902Z] Copying: 976/1024 [MB] (16 MBps) [2024-10-16T20:31:19.848Z] Copying: 989/1024 [MB] (13 MBps) [2024-10-16T20:31:20.792Z] Copying: 1009/1024 [MB] (20 MBps) [2024-10-16T20:31:21.736Z] Copying: 1020/1024 [MB] (10 MBps) [2024-10-16T20:31:21.736Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-10-16 20:31:21.452684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.807 [2024-10-16 20:31:21.452846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:06.807 [2024-10-16 20:31:21.452867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:24:06.807 [2024-10-16 20:31:21.452875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.807 [2024-10-16 20:31:21.454397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:06.807 [2024-10-16 20:31:21.457646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.807 [2024-10-16 20:31:21.457689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:06.807 [2024-10-16 20:31:21.457707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms
00:24:06.807 [2024-10-16 20:31:21.457714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.807 [2024-10-16 20:31:21.470391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.807 [2024-10-16 20:31:21.470434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:06.807 [2024-10-16 20:31:21.470446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.992 ms
00:24:06.807 [2024-10-16 20:31:21.470454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.807 [2024-10-16 20:31:21.492807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.807 [2024-10-16 20:31:21.492840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:06.807 [2024-10-16 20:31:21.492851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.337 ms
00:24:06.807 [2024-10-16 20:31:21.492858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.807 [2024-10-16 20:31:21.498962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.807 [2024-10-16 20:31:21.498989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:24:06.807 [2024-10-16 20:31:21.499000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.070 ms
00:24:06.807 [2024-10-16 20:31:21.499008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.807 [2024-10-16 20:31:21.524059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.807 [2024-10-16 20:31:21.524103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:24:06.807 [2024-10-16 20:31:21.524114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.992 ms
00:24:06.807 [2024-10-16 20:31:21.524122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.807 [2024-10-16 20:31:21.538981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
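Both spdk_dd passes report the same 1024/1024 [MB] total, which is just the flag arithmetic; a quick sanity check in plain shell:

  echo $(( 262144 * 4096 / 1024 / 1024 ))    # 262144 blocks x 4096 B = 1024 MiB

What differs is the rate: roughly 258 MBps when filling the plain testfile2, versus an average of 16 MBps when the same amount is written through the freshly recovered ftl0 bdev above.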
00:24:06.807 [2024-10-16 20:31:21.539160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:06.807 [2024-10-16 20:31:21.539181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.824 ms 00:24:06.807 [2024-10-16 20:31:21.539189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.807 [2024-10-16 20:31:21.723543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.807 [2024-10-16 20:31:21.723602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:06.807 [2024-10-16 20:31:21.723615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 184.260 ms 00:24:06.807 [2024-10-16 20:31:21.723623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.070 [2024-10-16 20:31:21.747814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.070 [2024-10-16 20:31:21.747852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:07.070 [2024-10-16 20:31:21.747863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.171 ms 00:24:07.070 [2024-10-16 20:31:21.747870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.070 [2024-10-16 20:31:21.771092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.070 [2024-10-16 20:31:21.771121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:07.070 [2024-10-16 20:31:21.771132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.188 ms 00:24:07.070 [2024-10-16 20:31:21.771139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.070 [2024-10-16 20:31:21.793734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.070 [2024-10-16 20:31:21.793867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:07.070 [2024-10-16 20:31:21.793883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.564 ms 00:24:07.070 [2024-10-16 20:31:21.793890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.070 [2024-10-16 20:31:21.816571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.070 [2024-10-16 20:31:21.816687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:07.070 [2024-10-16 20:31:21.816741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.624 ms 00:24:07.070 [2024-10-16 20:31:21.816762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.070 [2024-10-16 20:31:21.817017] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:07.070 [2024-10-16 20:31:21.817105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100352 / 261120 wr_cnt: 1 state: open 00:24:07.070 [2024-10-16 20:31:21.817197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.817979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818506] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.818800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.819380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.819425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.819455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.819858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.819901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.819930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.820000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.820034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:07.070 [2024-10-16 20:31:21.820076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820358] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 
20:31:21.820991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.820998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:07.071 [2024-10-16 20:31:21.821161] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:07.071 [2024-10-16 20:31:21.821172] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86454fe5-dc7e-4923-91b2-585fdda5c058 00:24:07.071 [2024-10-16 20:31:21.821180] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100352 00:24:07.071 [2024-10-16 20:31:21.821187] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101312 00:24:07.071 [2024-10-16 20:31:21.821195] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100352 00:24:07.071 [2024-10-16 20:31:21.821208] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] WAF: 1.0096 00:24:07.071 [2024-10-16 20:31:21.821215] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:07.071 [2024-10-16 20:31:21.821222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:07.071 [2024-10-16 20:31:21.821229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:07.071 [2024-10-16 20:31:21.821235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:07.071 [2024-10-16 20:31:21.821242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:07.071 [2024-10-16 20:31:21.821250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.071 [2024-10-16 20:31:21.821258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:07.071 [2024-10-16 20:31:21.821266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.237 ms 00:24:07.071 [2024-10-16 20:31:21.821273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.833929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.071 [2024-10-16 20:31:21.834031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:07.071 [2024-10-16 20:31:21.834098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.613 ms 00:24:07.071 [2024-10-16 20:31:21.834120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.834340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.071 [2024-10-16 20:31:21.834444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:07.071 [2024-10-16 20:31:21.834500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:24:07.071 [2024-10-16 20:31:21.834521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.870098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.071 [2024-10-16 20:31:21.870208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.071 [2024-10-16 20:31:21.870256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.071 [2024-10-16 20:31:21.870278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.870342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.071 [2024-10-16 20:31:21.870362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.071 [2024-10-16 20:31:21.870386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.071 [2024-10-16 20:31:21.870404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.870479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.071 [2024-10-16 20:31:21.870546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.071 [2024-10-16 20:31:21.870569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.071 [2024-10-16 20:31:21.870587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.870614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.071 [2024-10-16 20:31:21.870634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.071 [2024-10-16 20:31:21.870653] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.071 [2024-10-16 20:31:21.870676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.947493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.071 [2024-10-16 20:31:21.947722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.071 [2024-10-16 20:31:21.947794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.071 [2024-10-16 20:31:21.947818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.979782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.071 [2024-10-16 20:31:21.979941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.071 [2024-10-16 20:31:21.980007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.071 [2024-10-16 20:31:21.980031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.980142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.071 [2024-10-16 20:31:21.980169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.071 [2024-10-16 20:31:21.980190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.071 [2024-10-16 20:31:21.980215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.071 [2024-10-16 20:31:21.980270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.071 [2024-10-16 20:31:21.980293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.072 [2024-10-16 20:31:21.980313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.072 [2024-10-16 20:31:21.980437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.072 [2024-10-16 20:31:21.980577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.072 [2024-10-16 20:31:21.980665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.072 [2024-10-16 20:31:21.980691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.072 [2024-10-16 20:31:21.980739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.072 [2024-10-16 20:31:21.980797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.072 [2024-10-16 20:31:21.980821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:07.072 [2024-10-16 20:31:21.980840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.072 [2024-10-16 20:31:21.980859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.072 [2024-10-16 20:31:21.980919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.072 [2024-10-16 20:31:21.980942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.072 [2024-10-16 20:31:21.980961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.072 [2024-10-16 20:31:21.980981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.072 [2024-10-16 20:31:21.981057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.072 [2024-10-16 20:31:21.981138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Open base bdev 00:24:07.072 [2024-10-16 20:31:21.981162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.072 [2024-10-16 20:31:21.981181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.072 [2024-10-16 20:31:21.981352] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.710 ms, result 0 00:24:08.987 00:24:08.987 00:24:08.987 20:31:23 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:10.900 20:31:25 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:10.900 [2024-10-16 20:31:25.536062] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:10.900 [2024-10-16 20:31:25.536182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77223 ] 00:24:10.900 [2024-10-16 20:31:25.686756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.161 [2024-10-16 20:31:25.903150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.421 [2024-10-16 20:31:26.189117] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.421 [2024-10-16 20:31:26.189196] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.421 [2024-10-16 20:31:26.345235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.421 [2024-10-16 20:31:26.345291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:11.421 [2024-10-16 20:31:26.345306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:11.421 [2024-10-16 20:31:26.345318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.421 [2024-10-16 20:31:26.345371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.421 [2024-10-16 20:31:26.345382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.422 [2024-10-16 20:31:26.345390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:11.422 [2024-10-16 20:31:26.345398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.422 [2024-10-16 20:31:26.345419] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:11.422 [2024-10-16 20:31:26.346206] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:11.422 [2024-10-16 20:31:26.346227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.422 [2024-10-16 20:31:26.346235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.422 [2024-10-16 20:31:26.346244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:24:11.422 [2024-10-16 20:31:26.346252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.422 [2024-10-16 20:31:26.347922] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:11.683 [2024-10-16 20:31:26.362367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
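Two figures in the stretch above are worth a quick arithmetic check. The statistics dump reports WAF = total writes / user writes = 101312 / 100352 ≈ 1.0096, i.e. under 1% write amplification on top of the user workload. And the spdk_dd invocation just traced (--count=262144) is consistent with the 1024 MiB totals in the progress output if the ftl0 bdev exposes 4 KiB blocks: 262144 × 4096 bytes is exactly 1 GiB (the 4 KiB block size is an inference from the log, not stated in it). A worked check, with values copied from the log:

# Write amplification factor, from ftl_dev_dump_stats above.
total_writes = 101312   # media writes performed by FTL
user_writes  = 100352   # writes requested by the host
print(f"WAF = {total_writes / user_writes:.4f}")  # 1.0096, matching the dump

# Transfer size implied by the spdk_dd command line above.
blocks = 262144          # spdk_dd --count
block_size = 4096        # assumed 4 KiB FTL block size (inferred, not logged)
print(blocks * block_size // 2**20, "MiB")  # 1024 MiB, matching "Copying: .../1024 [MB]"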
00:24:11.683 [2024-10-16 20:31:26.362411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:11.683 [2024-10-16 20:31:26.362424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.446 ms 00:24:11.683 [2024-10-16 20:31:26.362433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.683 [2024-10-16 20:31:26.362503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.683 [2024-10-16 20:31:26.362513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:11.683 [2024-10-16 20:31:26.362522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:11.683 [2024-10-16 20:31:26.362529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.683 [2024-10-16 20:31:26.370854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.683 [2024-10-16 20:31:26.371026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.683 [2024-10-16 20:31:26.371066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.249 ms 00:24:11.683 [2024-10-16 20:31:26.371076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.683 [2024-10-16 20:31:26.371173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.683 [2024-10-16 20:31:26.371183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.683 [2024-10-16 20:31:26.371192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:11.683 [2024-10-16 20:31:26.371200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.683 [2024-10-16 20:31:26.371246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.683 [2024-10-16 20:31:26.371256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:11.683 [2024-10-16 20:31:26.371265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:11.683 [2024-10-16 20:31:26.371272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.683 [2024-10-16 20:31:26.371304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:11.683 [2024-10-16 20:31:26.375406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.683 [2024-10-16 20:31:26.375444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.683 [2024-10-16 20:31:26.375455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.116 ms 00:24:11.683 [2024-10-16 20:31:26.375463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.684 [2024-10-16 20:31:26.375500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.684 [2024-10-16 20:31:26.375508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:11.684 [2024-10-16 20:31:26.375517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:11.684 [2024-10-16 20:31:26.375527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.684 [2024-10-16 20:31:26.375575] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:11.684 [2024-10-16 20:31:26.375597] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:11.684 [2024-10-16 20:31:26.375633] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:11.684 [2024-10-16 20:31:26.375649] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:11.684 [2024-10-16 20:31:26.375725] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:11.684 [2024-10-16 20:31:26.375737] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:11.684 [2024-10-16 20:31:26.375749] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:11.684 [2024-10-16 20:31:26.375760] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:11.684 [2024-10-16 20:31:26.375770] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:11.684 [2024-10-16 20:31:26.375778] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:11.684 [2024-10-16 20:31:26.375786] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:11.684 [2024-10-16 20:31:26.375794] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:11.684 [2024-10-16 20:31:26.375802] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:11.684 [2024-10-16 20:31:26.375811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.684 [2024-10-16 20:31:26.375818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:11.684 [2024-10-16 20:31:26.375826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:24:11.684 [2024-10-16 20:31:26.375835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.684 [2024-10-16 20:31:26.375897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.684 [2024-10-16 20:31:26.375905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:11.684 [2024-10-16 20:31:26.375913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:11.684 [2024-10-16 20:31:26.375919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.684 [2024-10-16 20:31:26.375990] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:11.684 [2024-10-16 20:31:26.376000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:11.684 [2024-10-16 20:31:26.376008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.684 [2024-10-16 20:31:26.376016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376024] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:11.684 [2024-10-16 20:31:26.376030] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:11.684 [2024-10-16 20:31:26.376067] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:11.684 [2024-10-16 20:31:26.376074] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.684 [2024-10-16 20:31:26.376089] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:11.684 [2024-10-16 20:31:26.376096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:11.684 [2024-10-16 20:31:26.376102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.684 [2024-10-16 20:31:26.376109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:11.684 [2024-10-16 20:31:26.376116] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:11.684 [2024-10-16 20:31:26.376124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:11.684 [2024-10-16 20:31:26.376146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:11.684 [2024-10-16 20:31:26.376153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:11.684 [2024-10-16 20:31:26.376167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:11.684 [2024-10-16 20:31:26.376174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:11.684 [2024-10-16 20:31:26.376182] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:11.684 [2024-10-16 20:31:26.376189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:11.684 [2024-10-16 20:31:26.376202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:11.684 [2024-10-16 20:31:26.376209] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:11.684 [2024-10-16 20:31:26.376223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:11.684 [2024-10-16 20:31:26.376229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:11.684 [2024-10-16 20:31:26.376242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:11.684 [2024-10-16 20:31:26.376248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:11.684 [2024-10-16 20:31:26.376261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:11.684 [2024-10-16 20:31:26.376268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.684 [2024-10-16 20:31:26.376282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:11.684 [2024-10-16 20:31:26.376289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:11.684 [2024-10-16 20:31:26.376295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.684 [2024-10-16 20:31:26.376301] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:11.684 [2024-10-16 20:31:26.376311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:11.684 
[2024-10-16 20:31:26.376318] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.684 [2024-10-16 20:31:26.376328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.684 [2024-10-16 20:31:26.376336] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:11.684 [2024-10-16 20:31:26.376343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:11.684 [2024-10-16 20:31:26.376350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:11.684 [2024-10-16 20:31:26.376358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:11.684 [2024-10-16 20:31:26.376372] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:11.684 [2024-10-16 20:31:26.376378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:11.684 [2024-10-16 20:31:26.376386] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:11.684 [2024-10-16 20:31:26.376396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.684 [2024-10-16 20:31:26.376405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:11.684 [2024-10-16 20:31:26.376413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:11.684 [2024-10-16 20:31:26.376419] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:11.684 [2024-10-16 20:31:26.376427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:11.684 [2024-10-16 20:31:26.376434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:11.684 [2024-10-16 20:31:26.376441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:11.684 [2024-10-16 20:31:26.376447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:11.684 [2024-10-16 20:31:26.376454] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:11.684 [2024-10-16 20:31:26.376462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:11.684 [2024-10-16 20:31:26.376469] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:11.684 [2024-10-16 20:31:26.376476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:11.684 [2024-10-16 20:31:26.376483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:11.684 [2024-10-16 20:31:26.376491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:11.684 [2024-10-16 20:31:26.376498] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:11.684 [2024-10-16 20:31:26.376507] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.684 [2024-10-16 20:31:26.376515] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:11.684 [2024-10-16 20:31:26.376522] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:11.684 [2024-10-16 20:31:26.376530] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:11.684 [2024-10-16 20:31:26.376537] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:11.684 [2024-10-16 20:31:26.376544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.684 [2024-10-16 20:31:26.376551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:11.684 [2024-10-16 20:31:26.376559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:24:11.684 [2024-10-16 20:31:26.376566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.684 [2024-10-16 20:31:26.394803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.684 [2024-10-16 20:31:26.394974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.684 [2024-10-16 20:31:26.394994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.196 ms 00:24:11.684 [2024-10-16 20:31:26.395009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.684 [2024-10-16 20:31:26.395126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.395136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:11.685 [2024-10-16 20:31:26.395145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:11.685 [2024-10-16 20:31:26.395153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.441511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.441696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:11.685 [2024-10-16 20:31:26.441719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.305 ms 00:24:11.685 [2024-10-16 20:31:26.441728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.441778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.441788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:11.685 [2024-10-16 20:31:26.441797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:11.685 [2024-10-16 20:31:26.441805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.442418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.442440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:11.685 [2024-10-16 20:31:26.442451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.561 ms 00:24:11.685 [2024-10-16 20:31:26.442465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.442594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.442604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:11.685 [2024-10-16 20:31:26.442613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:24:11.685 [2024-10-16 20:31:26.442622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.459287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.459328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:11.685 [2024-10-16 20:31:26.459339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.641 ms 00:24:11.685 [2024-10-16 20:31:26.459348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.473552] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:11.685 [2024-10-16 20:31:26.473596] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:11.685 [2024-10-16 20:31:26.473608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.473616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:11.685 [2024-10-16 20:31:26.473626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.151 ms 00:24:11.685 [2024-10-16 20:31:26.473633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.499833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.499879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:11.685 [2024-10-16 20:31:26.499891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.148 ms 00:24:11.685 [2024-10-16 20:31:26.499899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.512588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.512652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:11.685 [2024-10-16 20:31:26.512664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.639 ms 00:24:11.685 [2024-10-16 20:31:26.512672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.525393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.525433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:11.685 [2024-10-16 20:31:26.525455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.671 ms 00:24:11.685 [2024-10-16 20:31:26.525463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.525849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.525861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:11.685 [2024-10-16 20:31:26.525871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:24:11.685 [2024-10-16 
20:31:26.525878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.593495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.593547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:11.685 [2024-10-16 20:31:26.593563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.597 ms 00:24:11.685 [2024-10-16 20:31:26.593572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.604873] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:11.685 [2024-10-16 20:31:26.607901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.608089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:11.685 [2024-10-16 20:31:26.608109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.274 ms 00:24:11.685 [2024-10-16 20:31:26.608124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.608198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.608209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:11.685 [2024-10-16 20:31:26.608218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:11.685 [2024-10-16 20:31:26.608225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.685 [2024-10-16 20:31:26.609613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.685 [2024-10-16 20:31:26.609660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:11.685 [2024-10-16 20:31:26.609671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:24:11.685 [2024-10-16 20:31:26.609678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.946 [2024-10-16 20:31:26.611066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.946 [2024-10-16 20:31:26.611101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:11.946 [2024-10-16 20:31:26.611111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms 00:24:11.946 [2024-10-16 20:31:26.611119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.946 [2024-10-16 20:31:26.611153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.946 [2024-10-16 20:31:26.611161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:11.946 [2024-10-16 20:31:26.611175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:11.946 [2024-10-16 20:31:26.611182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.946 [2024-10-16 20:31:26.611218] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:11.946 [2024-10-16 20:31:26.611228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.946 [2024-10-16 20:31:26.611239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:11.946 [2024-10-16 20:31:26.611246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:11.946 [2024-10-16 20:31:26.611254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.946 [2024-10-16 20:31:26.637153] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.946 [2024-10-16 20:31:26.637195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:11.946 [2024-10-16 20:31:26.637208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.879 ms 00:24:11.946 [2024-10-16 20:31:26.637216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.946 [2024-10-16 20:31:26.637303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.946 [2024-10-16 20:31:26.637312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:11.946 [2024-10-16 20:31:26.637322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:11.946 [2024-10-16 20:31:26.637329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.946 [2024-10-16 20:31:26.643682] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.758 ms, result 0 00:24:12.915  [2024-10-16T20:31:29.246Z] Copying: 1012/1048576 [kB] (1012 kBps) [2024-10-16T20:31:30.190Z] Copying: 3680/1048576 [kB] (2668 kBps) [2024-10-16T20:31:31.134Z] Copying: 14/1024 [MB] (11 MBps) [2024-10-16T20:31:32.076Z] Copying: 39/1024 [MB] (24 MBps) [2024-10-16T20:31:33.019Z] Copying: 80/1024 [MB] (41 MBps) [2024-10-16T20:31:33.963Z] Copying: 126/1024 [MB] (46 MBps) [2024-10-16T20:31:34.907Z] Copying: 168/1024 [MB] (41 MBps) [2024-10-16T20:31:35.849Z] Copying: 216/1024 [MB] (47 MBps) [2024-10-16T20:31:37.234Z] Copying: 256/1024 [MB] (39 MBps) [2024-10-16T20:31:38.177Z] Copying: 295/1024 [MB] (39 MBps) [2024-10-16T20:31:39.122Z] Copying: 320/1024 [MB] (24 MBps) [2024-10-16T20:31:40.066Z] Copying: 346/1024 [MB] (26 MBps) [2024-10-16T20:31:41.009Z] Copying: 380/1024 [MB] (33 MBps) [2024-10-16T20:31:41.952Z] Copying: 412/1024 [MB] (31 MBps) [2024-10-16T20:31:42.896Z] Copying: 460/1024 [MB] (47 MBps) [2024-10-16T20:31:43.837Z] Copying: 502/1024 [MB] (42 MBps) [2024-10-16T20:31:45.224Z] Copying: 538/1024 [MB] (36 MBps) [2024-10-16T20:31:46.168Z] Copying: 563/1024 [MB] (25 MBps) [2024-10-16T20:31:47.112Z] Copying: 594/1024 [MB] (31 MBps) [2024-10-16T20:31:48.078Z] Copying: 622/1024 [MB] (27 MBps) [2024-10-16T20:31:49.057Z] Copying: 664/1024 [MB] (42 MBps) [2024-10-16T20:31:50.000Z] Copying: 705/1024 [MB] (40 MBps) [2024-10-16T20:31:50.942Z] Copying: 730/1024 [MB] (25 MBps) [2024-10-16T20:31:51.882Z] Copying: 755/1024 [MB] (24 MBps) [2024-10-16T20:31:53.269Z] Copying: 783/1024 [MB] (27 MBps) [2024-10-16T20:31:53.840Z] Copying: 810/1024 [MB] (27 MBps) [2024-10-16T20:31:55.223Z] Copying: 841/1024 [MB] (30 MBps) [2024-10-16T20:31:56.162Z] Copying: 871/1024 [MB] (29 MBps) [2024-10-16T20:31:57.097Z] Copying: 902/1024 [MB] (31 MBps) [2024-10-16T20:31:58.042Z] Copying: 939/1024 [MB] (36 MBps) [2024-10-16T20:31:58.983Z] Copying: 967/1024 [MB] (27 MBps) [2024-10-16T20:31:59.924Z] Copying: 990/1024 [MB] (23 MBps) [2024-10-16T20:32:00.494Z] Copying: 1011/1024 [MB] (20 MBps) [2024-10-16T20:32:00.757Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-10-16 20:32:00.506633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.506723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:45.828 [2024-10-16 20:32:00.506742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:45.828 [2024-10-16 20:32:00.506752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
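Two consistency checks fall out of the pass that just completed. First, the wall clock agrees with the reported "average 30 MBps": the ticks span 20:31:29.246 to 20:32:00.757, and folding in the few seconds of ramp-up after FTL startup finished at 20:31:26.6 lands right around 30 MiB/s. Second, the layout dumped during this startup is internally consistent: 20971520 L2P entries at the stated 4-byte address size is exactly the 80.00 MiB reported for the l2p region. A rough sketch (timestamps copied from the log; sub-second rounding ignored):

from datetime import datetime

# Wall-clock cross-check of "average 30 MBps"; tick timestamps copied from the log.
first = datetime.fromisoformat("2024-10-16T20:31:29.246")  # first progress tick
last  = datetime.fromisoformat("2024-10-16T20:32:00.757")  # "Copying: 1024/1024 [MB]" tick
print(f"{1024 / (last - first).total_seconds():.0f} MiB/s")  # ~32 tick-to-tick; ~30 with ramp-up

# L2P region size check against the ftl_layout dump above.
entries, addr_size = 20971520, 4           # "L2P entries" and "L2P address size"
print(entries * addr_size / 2**20, "MiB")  # 80.0 -> matches "Region l2p ... blocks: 80.00 MiB"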
00:24:45.828 [2024-10-16 20:32:00.506781] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:45.828 [2024-10-16 20:32:00.510687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.510733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:45.828 [2024-10-16 20:32:00.510745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.886 ms 00:24:45.828 [2024-10-16 20:32:00.510754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.511053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.511067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:45.828 [2024-10-16 20:32:00.511078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:24:45.828 [2024-10-16 20:32:00.511086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.525681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.525731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:45.828 [2024-10-16 20:32:00.525744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.576 ms 00:24:45.828 [2024-10-16 20:32:00.525753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.531884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.531929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:45.828 [2024-10-16 20:32:00.531941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:24:45.828 [2024-10-16 20:32:00.531949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.558666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.558712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:45.828 [2024-10-16 20:32:00.558724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.651 ms 00:24:45.828 [2024-10-16 20:32:00.558731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.575183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.575225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:45.828 [2024-10-16 20:32:00.575238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.407 ms 00:24:45.828 [2024-10-16 20:32:00.575247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.585183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.585227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:45.828 [2024-10-16 20:32:00.585239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.886 ms 00:24:45.828 [2024-10-16 20:32:00.585254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.611147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.611192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:45.828 
[2024-10-16 20:32:00.611204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.878 ms 00:24:45.828 [2024-10-16 20:32:00.611212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.636963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.637018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:45.828 [2024-10-16 20:32:00.637031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.707 ms 00:24:45.828 [2024-10-16 20:32:00.637073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.661926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.661970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:45.828 [2024-10-16 20:32:00.661982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.809 ms 00:24:45.828 [2024-10-16 20:32:00.661989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.686945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.828 [2024-10-16 20:32:00.686989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:45.828 [2024-10-16 20:32:00.687001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.851 ms 00:24:45.828 [2024-10-16 20:32:00.687007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.828 [2024-10-16 20:32:00.687067] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:45.828 [2024-10-16 20:32:00.687084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:45.828 [2024-10-16 20:32:00.687095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:24:45.828 [2024-10-16 20:32:00.687103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:45.828 [2024-10-16 20:32:00.687111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:45.828 [2024-10-16 20:32:00.687119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:45.828 [2024-10-16 20:32:00.687127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 
[2024-10-16 20:32:00.687189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 
state: free 00:24:45.829 [2024-10-16 20:32:00.687393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 
0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:45.829 [2024-10-16 20:32:00.687801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:45.830 [2024-10-16 20:32:00.687809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:45.830 [2024-10-16 20:32:00.687816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:45.830 [2024-10-16 20:32:00.687824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:45.830 [2024-10-16 20:32:00.687831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:45.830 [2024-10-16 20:32:00.687839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:45.830 [2024-10-16 20:32:00.687846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:45.830 [2024-10-16 20:32:00.687854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:45.830 [2024-10-16 20:32:00.687862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:45.830 [2024-10-16 20:32:00.687877] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:45.830 [2024-10-16 20:32:00.687885] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86454fe5-dc7e-4923-91b2-585fdda5c058 00:24:45.830 [2024-10-16 20:32:00.687894] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:24:45.830 [2024-10-16 20:32:00.687907] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 166592 00:24:45.830 [2024-10-16 20:32:00.687915] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 164608 00:24:45.830 [2024-10-16 20:32:00.687924] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0121 00:24:45.830 [2024-10-16 20:32:00.687932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:45.830 [2024-10-16 20:32:00.687940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:45.830 [2024-10-16 20:32:00.687948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:45.830 [2024-10-16 20:32:00.687955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:45.830 [2024-10-16 20:32:00.687969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:45.830 [2024-10-16 20:32:00.687976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.830 [2024-10-16 20:32:00.687985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:45.830 [2024-10-16 20:32:00.687994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:24:45.830 [2024-10-16 20:32:00.688003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.830 [2024-10-16 20:32:00.701762] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.830 [2024-10-16 20:32:00.701941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:45.830 [2024-10-16 20:32:00.701959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.724 ms 00:24:45.830 [2024-10-16 20:32:00.701967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.830 [2024-10-16 20:32:00.702234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.830 [2024-10-16 20:32:00.702246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:45.830 [2024-10-16 20:32:00.702255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:24:45.830 [2024-10-16 20:32:00.702269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.830 [2024-10-16 20:32:00.741165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.830 [2024-10-16 20:32:00.741212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:45.830 [2024-10-16 20:32:00.741224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.830 [2024-10-16 20:32:00.741231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.830 [2024-10-16 20:32:00.741288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.830 [2024-10-16 20:32:00.741296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:45.830 [2024-10-16 20:32:00.741305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.830 [2024-10-16 20:32:00.741313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.830 [2024-10-16 20:32:00.741398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.830 [2024-10-16 20:32:00.741409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:45.830 [2024-10-16 20:32:00.741417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.830 [2024-10-16 20:32:00.741425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.830 [2024-10-16 20:32:00.741441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.830 [2024-10-16 20:32:00.741450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:45.830 [2024-10-16 20:32:00.741458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.830 [2024-10-16 20:32:00.741466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.092 [2024-10-16 20:32:00.822664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.092 [2024-10-16 20:32:00.822718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.092 [2024-10-16 20:32:00.822729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.092 [2024-10-16 20:32:00.822738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.092 [2024-10-16 20:32:00.854828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.092 [2024-10-16 20:32:00.854874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.092 [2024-10-16 20:32:00.854886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.092 [2024-10-16 20:32:00.854894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:46.092 [2024-10-16 20:32:00.854964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.092 [2024-10-16 20:32:00.854973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.092 [2024-10-16 20:32:00.854983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.092 [2024-10-16 20:32:00.854991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.092 [2024-10-16 20:32:00.855036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.092 [2024-10-16 20:32:00.855078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.092 [2024-10-16 20:32:00.855088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.092 [2024-10-16 20:32:00.855096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.092 [2024-10-16 20:32:00.855201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.092 [2024-10-16 20:32:00.855215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.092 [2024-10-16 20:32:00.855224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.092 [2024-10-16 20:32:00.855232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.092 [2024-10-16 20:32:00.855263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.092 [2024-10-16 20:32:00.855273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:46.092 [2024-10-16 20:32:00.855281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.092 [2024-10-16 20:32:00.855289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.092 [2024-10-16 20:32:00.855327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.092 [2024-10-16 20:32:00.855340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.092 [2024-10-16 20:32:00.855349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.092 [2024-10-16 20:32:00.855357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.092 [2024-10-16 20:32:00.855405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.092 [2024-10-16 20:32:00.855416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.092 [2024-10-16 20:32:00.855424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.092 [2024-10-16 20:32:00.855431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.092 [2024-10-16 20:32:00.855564] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.904 ms, result 0 00:24:47.035 00:24:47.035 00:24:47.035 20:32:01 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:49.583 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:49.584 20:32:04 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:49.584 [2024-10-16 20:32:04.133294] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:49.584 [2024-10-16 20:32:04.133432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77620 ] 00:24:49.584 [2024-10-16 20:32:04.288681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.845 [2024-10-16 20:32:04.514915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.108 [2024-10-16 20:32:04.805685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.108 [2024-10-16 20:32:04.805768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.108 [2024-10-16 20:32:04.962466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.962529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:50.108 [2024-10-16 20:32:04.962545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:50.108 [2024-10-16 20:32:04.962557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.962611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.962621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.108 [2024-10-16 20:32:04.962630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:50.108 [2024-10-16 20:32:04.962638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.962658] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:50.108 [2024-10-16 20:32:04.963463] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:50.108 [2024-10-16 20:32:04.963485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.963493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.108 [2024-10-16 20:32:04.963502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:24:50.108 [2024-10-16 20:32:04.963511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.965357] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:50.108 [2024-10-16 20:32:04.980003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.980073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:50.108 [2024-10-16 20:32:04.980089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.648 ms 00:24:50.108 [2024-10-16 20:32:04.980097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.980177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.980187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:50.108 [2024-10-16 20:32:04.980219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:50.108 [2024-10-16 20:32:04.980227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.988321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 
20:32:04.988365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.108 [2024-10-16 20:32:04.988375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.014 ms 00:24:50.108 [2024-10-16 20:32:04.988384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.988480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.988490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.108 [2024-10-16 20:32:04.988499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:50.108 [2024-10-16 20:32:04.988507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.988582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.988592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:50.108 [2024-10-16 20:32:04.988600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:50.108 [2024-10-16 20:32:04.988608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.988640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:50.108 [2024-10-16 20:32:04.992901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.992943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.108 [2024-10-16 20:32:04.992953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.275 ms 00:24:50.108 [2024-10-16 20:32:04.992961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.992999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.993008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:50.108 [2024-10-16 20:32:04.993017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:50.108 [2024-10-16 20:32:04.993027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.993098] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:50.108 [2024-10-16 20:32:04.993122] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:50.108 [2024-10-16 20:32:04.993159] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:50.108 [2024-10-16 20:32:04.993176] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:50.108 [2024-10-16 20:32:04.993252] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:50.108 [2024-10-16 20:32:04.993263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:50.108 [2024-10-16 20:32:04.993277] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:50.108 [2024-10-16 20:32:04.993289] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:50.108 [2024-10-16 20:32:04.993298] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:50.108 [2024-10-16 20:32:04.993306] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:50.108 [2024-10-16 20:32:04.993313] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:50.108 [2024-10-16 20:32:04.993322] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:50.108 [2024-10-16 20:32:04.993330] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:50.108 [2024-10-16 20:32:04.993339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.993346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:50.108 [2024-10-16 20:32:04.993354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:24:50.108 [2024-10-16 20:32:04.993361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.993430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.108 [2024-10-16 20:32:04.993440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:50.108 [2024-10-16 20:32:04.993448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:50.108 [2024-10-16 20:32:04.993455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.108 [2024-10-16 20:32:04.993524] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:50.108 [2024-10-16 20:32:04.993534] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:50.108 [2024-10-16 20:32:04.993542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.108 [2024-10-16 20:32:04.993551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:50.108 [2024-10-16 20:32:04.993565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:50.108 [2024-10-16 20:32:04.993578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:50.108 [2024-10-16 20:32:04.993585] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.108 [2024-10-16 20:32:04.993599] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:50.108 [2024-10-16 20:32:04.993605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:50.108 [2024-10-16 20:32:04.993612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.108 [2024-10-16 20:32:04.993621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:50.108 [2024-10-16 20:32:04.993628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:50.108 [2024-10-16 20:32:04.993635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:50.108 [2024-10-16 20:32:04.993657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:50.108 [2024-10-16 20:32:04.993664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:50.108 [2024-10-16 20:32:04.993678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:50.108 [2024-10-16 20:32:04.993684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:50.108 [2024-10-16 20:32:04.993691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:50.108 [2024-10-16 20:32:04.993698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:50.108 [2024-10-16 20:32:04.993711] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:50.108 [2024-10-16 20:32:04.993717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:50.108 [2024-10-16 20:32:04.993730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:50.108 [2024-10-16 20:32:04.993737] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:50.108 [2024-10-16 20:32:04.993750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:50.108 [2024-10-16 20:32:04.993758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:50.108 [2024-10-16 20:32:04.993772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:50.108 [2024-10-16 20:32:04.993778] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:50.108 [2024-10-16 20:32:04.993785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.108 [2024-10-16 20:32:04.993792] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:50.109 [2024-10-16 20:32:04.993799] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:50.109 [2024-10-16 20:32:04.993806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.109 [2024-10-16 20:32:04.993811] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:50.109 [2024-10-16 20:32:04.993821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:50.109 [2024-10-16 20:32:04.993828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.109 [2024-10-16 20:32:04.993835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.109 [2024-10-16 20:32:04.993843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:50.109 [2024-10-16 20:32:04.993852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:50.109 [2024-10-16 20:32:04.993859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:50.109 [2024-10-16 20:32:04.993866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:50.109 [2024-10-16 20:32:04.993872] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:50.109 [2024-10-16 20:32:04.993879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:50.109 [2024-10-16 20:32:04.993887] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:50.109 [2024-10-16 20:32:04.993898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.109 [2024-10-16 20:32:04.993907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:50.109 [2024-10-16 20:32:04.993915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:50.109 [2024-10-16 20:32:04.993922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:50.109 [2024-10-16 20:32:04.993929] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:50.109 [2024-10-16 20:32:04.993937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:50.109 [2024-10-16 20:32:04.993945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:50.109 [2024-10-16 20:32:04.993952] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:50.109 [2024-10-16 20:32:04.993959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:50.109 [2024-10-16 20:32:04.993966] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:50.109 [2024-10-16 20:32:04.993974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:50.109 [2024-10-16 20:32:04.993982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:50.109 [2024-10-16 20:32:04.993990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:50.109 [2024-10-16 20:32:04.993998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:50.109 [2024-10-16 20:32:04.994004] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:50.109 [2024-10-16 20:32:04.994012] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.109 [2024-10-16 20:32:04.994022] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:50.109 [2024-10-16 20:32:04.994029] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:50.109 [2024-10-16 20:32:04.994037] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:50.109 [2024-10-16 20:32:04.994058] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:50.109 [2024-10-16 20:32:04.994065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.109 [2024-10-16 20:32:04.994073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:50.109 [2024-10-16 20:32:04.994082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:24:50.109 [2024-10-16 20:32:04.994089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.109 [2024-10-16 20:32:05.012324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.109 [2024-10-16 20:32:05.012370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.109 [2024-10-16 20:32:05.012382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.191 ms 00:24:50.109 [2024-10-16 20:32:05.012397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.109 [2024-10-16 20:32:05.012487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.109 [2024-10-16 20:32:05.012497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:50.109 [2024-10-16 20:32:05.012507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:50.109 [2024-10-16 20:32:05.012516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.059725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.059775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.370 [2024-10-16 20:32:05.059788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.132 ms 00:24:50.370 [2024-10-16 20:32:05.059796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.059846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.059857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.370 [2024-10-16 20:32:05.059865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:50.370 [2024-10-16 20:32:05.059873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.060499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.060540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.370 [2024-10-16 20:32:05.060551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:24:50.370 [2024-10-16 20:32:05.060592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.060722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.060732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.370 [2024-10-16 20:32:05.060741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:50.370 [2024-10-16 20:32:05.060748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.077383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.077427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.370 [2024-10-16 20:32:05.077437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.611 ms 00:24:50.370 [2024-10-16 
20:32:05.077445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.092022] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:50.370 [2024-10-16 20:32:05.092080] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:50.370 [2024-10-16 20:32:05.092093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.092102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:50.370 [2024-10-16 20:32:05.092112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.538 ms 00:24:50.370 [2024-10-16 20:32:05.092120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.118631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.118694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:50.370 [2024-10-16 20:32:05.118706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.455 ms 00:24:50.370 [2024-10-16 20:32:05.118715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.132137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.132182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:50.370 [2024-10-16 20:32:05.132195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.366 ms 00:24:50.370 [2024-10-16 20:32:05.132202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.145308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.145361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:50.370 [2024-10-16 20:32:05.145373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.057 ms 00:24:50.370 [2024-10-16 20:32:05.145381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.145770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.145783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:50.370 [2024-10-16 20:32:05.145792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:24:50.370 [2024-10-16 20:32:05.145801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.213310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.213539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:50.370 [2024-10-16 20:32:05.213564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.490 ms 00:24:50.370 [2024-10-16 20:32:05.213574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.225039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:50.370 [2024-10-16 20:32:05.227990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.228036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:50.370 [2024-10-16 20:32:05.228065] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.368 ms 00:24:50.370 [2024-10-16 20:32:05.228080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.228155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.228166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:50.370 [2024-10-16 20:32:05.228176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:50.370 [2024-10-16 20:32:05.228184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.229077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.229123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:50.370 [2024-10-16 20:32:05.229135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:24:50.370 [2024-10-16 20:32:05.229144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.230517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.230557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:50.370 [2024-10-16 20:32:05.230568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms 00:24:50.370 [2024-10-16 20:32:05.230575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.230610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.230618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:50.370 [2024-10-16 20:32:05.230633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:50.370 [2024-10-16 20:32:05.230647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.230683] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:50.370 [2024-10-16 20:32:05.230694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.230704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:50.370 [2024-10-16 20:32:05.230713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:50.370 [2024-10-16 20:32:05.230721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.257183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.257232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:50.370 [2024-10-16 20:32:05.257246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.443 ms 00:24:50.370 [2024-10-16 20:32:05.257254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.257353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.370 [2024-10-16 20:32:05.257364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:50.370 [2024-10-16 20:32:05.257373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:50.370 [2024-10-16 20:32:05.257381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.370 [2024-10-16 20:32:05.258597] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 295.658 ms, result 0 00:24:51.758  [2024-10-16T20:32:07.632Z] Copying: 22/1024 [MB] (22 MBps) [2024-10-16T20:32:08.607Z] Copying: 38/1024 [MB] (16 MBps) [2024-10-16T20:32:09.558Z] Copying: 51/1024 [MB] (12 MBps) [2024-10-16T20:32:10.499Z] Copying: 63/1024 [MB] (12 MBps) [2024-10-16T20:32:11.440Z] Copying: 77/1024 [MB] (14 MBps) [2024-10-16T20:32:12.825Z] Copying: 88/1024 [MB] (11 MBps) [2024-10-16T20:32:13.767Z] Copying: 107/1024 [MB] (18 MBps) [2024-10-16T20:32:14.708Z] Copying: 126/1024 [MB] (19 MBps) [2024-10-16T20:32:15.649Z] Copying: 140/1024 [MB] (14 MBps) [2024-10-16T20:32:16.592Z] Copying: 161/1024 [MB] (20 MBps) [2024-10-16T20:32:17.534Z] Copying: 175/1024 [MB] (14 MBps) [2024-10-16T20:32:18.476Z] Copying: 191/1024 [MB] (16 MBps) [2024-10-16T20:32:19.866Z] Copying: 209/1024 [MB] (17 MBps) [2024-10-16T20:32:20.438Z] Copying: 223/1024 [MB] (14 MBps) [2024-10-16T20:32:21.824Z] Copying: 235/1024 [MB] (11 MBps) [2024-10-16T20:32:22.768Z] Copying: 245/1024 [MB] (10 MBps) [2024-10-16T20:32:23.712Z] Copying: 264/1024 [MB] (18 MBps) [2024-10-16T20:32:24.652Z] Copying: 275/1024 [MB] (10 MBps) [2024-10-16T20:32:25.597Z] Copying: 285/1024 [MB] (10 MBps) [2024-10-16T20:32:26.540Z] Copying: 296/1024 [MB] (10 MBps) [2024-10-16T20:32:27.482Z] Copying: 315/1024 [MB] (19 MBps) [2024-10-16T20:32:28.467Z] Copying: 340/1024 [MB] (24 MBps) [2024-10-16T20:32:29.854Z] Copying: 354/1024 [MB] (14 MBps) [2024-10-16T20:32:30.798Z] Copying: 370/1024 [MB] (16 MBps) [2024-10-16T20:32:31.742Z] Copying: 398/1024 [MB] (27 MBps) [2024-10-16T20:32:32.688Z] Copying: 424/1024 [MB] (25 MBps) [2024-10-16T20:32:33.634Z] Copying: 446/1024 [MB] (22 MBps) [2024-10-16T20:32:34.579Z] Copying: 469/1024 [MB] (23 MBps) [2024-10-16T20:32:35.524Z] Copying: 489/1024 [MB] (19 MBps) [2024-10-16T20:32:36.468Z] Copying: 511/1024 [MB] (21 MBps) [2024-10-16T20:32:37.855Z] Copying: 532/1024 [MB] (21 MBps) [2024-10-16T20:32:38.797Z] Copying: 545/1024 [MB] (12 MBps) [2024-10-16T20:32:39.741Z] Copying: 555/1024 [MB] (10 MBps) [2024-10-16T20:32:40.685Z] Copying: 567/1024 [MB] (12 MBps) [2024-10-16T20:32:41.631Z] Copying: 579/1024 [MB] (11 MBps) [2024-10-16T20:32:42.577Z] Copying: 592/1024 [MB] (12 MBps) [2024-10-16T20:32:43.520Z] Copying: 602/1024 [MB] (10 MBps) [2024-10-16T20:32:44.465Z] Copying: 612/1024 [MB] (10 MBps) [2024-10-16T20:32:45.852Z] Copying: 629/1024 [MB] (16 MBps) [2024-10-16T20:32:46.795Z] Copying: 645/1024 [MB] (15 MBps) [2024-10-16T20:32:47.738Z] Copying: 659/1024 [MB] (13 MBps) [2024-10-16T20:32:48.727Z] Copying: 676/1024 [MB] (17 MBps) [2024-10-16T20:32:49.669Z] Copying: 688/1024 [MB] (12 MBps) [2024-10-16T20:32:50.609Z] Copying: 699/1024 [MB] (10 MBps) [2024-10-16T20:32:51.553Z] Copying: 711/1024 [MB] (11 MBps) [2024-10-16T20:32:52.497Z] Copying: 729/1024 [MB] (18 MBps) [2024-10-16T20:32:53.442Z] Copying: 743/1024 [MB] (14 MBps) [2024-10-16T20:32:54.831Z] Copying: 755/1024 [MB] (11 MBps) [2024-10-16T20:32:55.776Z] Copying: 766/1024 [MB] (10 MBps) [2024-10-16T20:32:56.720Z] Copying: 777/1024 [MB] (11 MBps) [2024-10-16T20:32:57.664Z] Copying: 789/1024 [MB] (12 MBps) [2024-10-16T20:32:58.609Z] Copying: 812/1024 [MB] (22 MBps) [2024-10-16T20:32:59.551Z] Copying: 829/1024 [MB] (16 MBps) [2024-10-16T20:33:00.495Z] Copying: 844/1024 [MB] (15 MBps) [2024-10-16T20:33:01.439Z] Copying: 859/1024 [MB] (15 MBps) [2024-10-16T20:33:02.826Z] Copying: 884/1024 [MB] (24 MBps) [2024-10-16T20:33:03.769Z] Copying: 904/1024 [MB] (19 MBps) [2024-10-16T20:33:04.712Z] Copying: 926/1024 
[MB] (22 MBps) [2024-10-16T20:33:05.656Z] Copying: 942/1024 [MB] (16 MBps) [2024-10-16T20:33:06.600Z] Copying: 969/1024 [MB] (27 MBps) [2024-10-16T20:33:07.542Z] Copying: 986/1024 [MB] (16 MBps) [2024-10-16T20:33:08.490Z] Copying: 1008/1024 [MB] (22 MBps) [2024-10-16T20:33:08.490Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-16 20:33:08.201310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.201392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:53.561 [2024-10-16 20:33:08.201409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:53.561 [2024-10-16 20:33:08.201419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.201445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.561 [2024-10-16 20:33:08.204713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.204767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:53.561 [2024-10-16 20:33:08.204779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:25:53.561 [2024-10-16 20:33:08.204788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.205220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.205269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:53.561 [2024-10-16 20:33:08.205295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:25:53.561 [2024-10-16 20:33:08.205318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.210234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.210279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:53.561 [2024-10-16 20:33:08.210298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:25:53.561 [2024-10-16 20:33:08.210307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.216763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.216927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:53.561 [2024-10-16 20:33:08.216947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.431 ms 00:25:53.561 [2024-10-16 20:33:08.216955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.243908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.244104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.561 [2024-10-16 20:33:08.244125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.877 ms 00:25:53.561 [2024-10-16 20:33:08.244133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.260352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.260401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.561 [2024-10-16 20:33:08.260414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.115 ms 00:25:53.561 [2024-10-16 20:33:08.260429] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.270458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.270507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.561 [2024-10-16 20:33:08.270519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.958 ms 00:25:53.561 [2024-10-16 20:33:08.270526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.296015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.296207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:53.561 [2024-10-16 20:33:08.296228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.473 ms 00:25:53.561 [2024-10-16 20:33:08.296236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.321681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.321724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:53.561 [2024-10-16 20:33:08.321748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.408 ms 00:25:53.561 [2024-10-16 20:33:08.321755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.346643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.346688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.561 [2024-10-16 20:33:08.346700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.844 ms 00:25:53.561 [2024-10-16 20:33:08.346706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.371488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.561 [2024-10-16 20:33:08.371533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.561 [2024-10-16 20:33:08.371544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.697 ms 00:25:53.561 [2024-10-16 20:33:08.371551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.561 [2024-10-16 20:33:08.371592] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.561 [2024-10-16 20:33:08.371613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:53.562 [2024-10-16 20:33:08.371625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:25:53.562 [2024-10-16 20:33:08.371633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371861] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.371996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372071] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 
20:33:08.372263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.562 [2024-10-16 20:33:08.372385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:53.563 [2024-10-16 20:33:08.372393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.563 [2024-10-16 20:33:08.372400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.563 [2024-10-16 20:33:08.372408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.563 [2024-10-16 20:33:08.372424] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.563 [2024-10-16 20:33:08.372433] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86454fe5-dc7e-4923-91b2-585fdda5c058 00:25:53.563 [2024-10-16 20:33:08.372452] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:25:53.563 [2024-10-16 20:33:08.372459] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:53.563 [2024-10-16 20:33:08.372467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:53.563 [2024-10-16 20:33:08.372475] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:53.563 [2024-10-16 20:33:08.372483] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.563 [2024-10-16 20:33:08.372491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.563 [2024-10-16 20:33:08.372498] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.563 [2024-10-16 20:33:08.372512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.563 [2024-10-16 20:33:08.372519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.563 [2024-10-16 20:33:08.372526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.563 [2024-10-16 20:33:08.372533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.563 [2024-10-16 20:33:08.372546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:25:53.563 [2024-10-16 20:33:08.372553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.563 [2024-10-16 20:33:08.385997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.563 [2024-10-16 20:33:08.386039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:53.563 [2024-10-16 20:33:08.386067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.411 ms 00:25:53.563 [2024-10-16 20:33:08.386075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.563 [2024-10-16 20:33:08.386324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.563 [2024-10-16 20:33:08.386336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.563 [2024-10-16 20:33:08.386345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:25:53.563 [2024-10-16 20:33:08.386353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.563 [2024-10-16 20:33:08.424857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.563 [2024-10-16 20:33:08.424908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.563 [2024-10-16 20:33:08.424921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.563 [2024-10-16 20:33:08.424929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.563 [2024-10-16 20:33:08.424993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.563 [2024-10-16 20:33:08.425002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.563 [2024-10-16 20:33:08.425010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.563 [2024-10-16 20:33:08.425018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.563 [2024-10-16 20:33:08.425116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.563 [2024-10-16 20:33:08.425128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.563 [2024-10-16 20:33:08.425136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.563 [2024-10-16 20:33:08.425144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.563 [2024-10-16 20:33:08.425159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.563 [2024-10-16 20:33:08.425172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.563 [2024-10-16 20:33:08.425180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.563 [2024-10-16 20:33:08.425211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.843 [2024-10-16 20:33:08.505560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:25:53.843 [2024-10-16 20:33:08.505604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.843 [2024-10-16 20:33:08.505616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.843 [2024-10-16 20:33:08.505624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.843 [2024-10-16 20:33:08.536015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.843 [2024-10-16 20:33:08.536094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.843 [2024-10-16 20:33:08.536106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.843 [2024-10-16 20:33:08.536114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.843 [2024-10-16 20:33:08.536188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.843 [2024-10-16 20:33:08.536222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.843 [2024-10-16 20:33:08.536231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.843 [2024-10-16 20:33:08.536240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.843 [2024-10-16 20:33:08.536283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.843 [2024-10-16 20:33:08.536293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.843 [2024-10-16 20:33:08.536306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.843 [2024-10-16 20:33:08.536314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.843 [2024-10-16 20:33:08.536417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.843 [2024-10-16 20:33:08.536428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.843 [2024-10-16 20:33:08.536436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.843 [2024-10-16 20:33:08.536470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.843 [2024-10-16 20:33:08.536504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.843 [2024-10-16 20:33:08.536513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:53.843 [2024-10-16 20:33:08.536522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.843 [2024-10-16 20:33:08.536533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.843 [2024-10-16 20:33:08.536575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.843 [2024-10-16 20:33:08.536585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.843 [2024-10-16 20:33:08.536593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.843 [2024-10-16 20:33:08.536601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.843 [2024-10-16 20:33:08.536647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.843 [2024-10-16 20:33:08.536657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.843 [2024-10-16 20:33:08.536667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.843 [2024-10-16 20:33:08.536675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.843 [2024-10-16 20:33:08.536805] 
mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.465 ms, result 0 00:25:54.796 00:25:54.796 00:25:54.796 20:33:09 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:57.342 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:57.342 20:33:11 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:57.342 20:33:11 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:57.342 20:33:11 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:57.342 20:33:11 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:57.342 20:33:11 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:57.342 20:33:11 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:57.342 20:33:11 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:57.342 Process with pid 75638 is not found 00:25:57.342 20:33:11 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75638 00:25:57.342 20:33:11 -- common/autotest_common.sh@926 -- # '[' -z 75638 ']' 00:25:57.342 20:33:11 -- common/autotest_common.sh@930 -- # kill -0 75638 00:25:57.342 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (75638) - No such process 00:25:57.342 20:33:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 75638 is not found' 00:25:57.342 20:33:11 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:57.342 Remove shared memory files 00:25:57.342 20:33:12 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:57.342 20:33:12 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:57.342 20:33:12 -- ftl/common.sh@205 -- # rm -f rm -f 00:25:57.342 20:33:12 -- ftl/common.sh@206 -- # rm -f rm -f 00:25:57.342 20:33:12 -- ftl/common.sh@207 -- # rm -f rm -f 00:25:57.342 20:33:12 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:57.342 20:33:12 -- ftl/common.sh@209 -- # rm -f rm -f 00:25:57.342 ************************************ 00:25:57.342 END TEST ftl_dirty_shutdown 00:25:57.342 ************************************ 00:25:57.342 00:25:57.342 real 4m12.671s 00:25:57.342 user 4m45.537s 00:25:57.342 sys 0m27.885s 00:25:57.342 20:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.342 20:33:12 -- common/autotest_common.sh@10 -- # set +x 00:25:57.342 20:33:12 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:57.342 20:33:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:57.342 20:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:57.342 20:33:12 -- common/autotest_common.sh@10 -- # set +x 00:25:57.342 ************************************ 00:25:57.342 START TEST ftl_upgrade_shutdown 00:25:57.342 ************************************ 00:25:57.342 20:33:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:57.603 * Looking for test storage... 
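The killprocess 75638 exchange in the teardown above shows the guard the harness applies before tearing a test target down: probe the pid with kill -0 and merely report, rather than fail, when the process has already exited (hence the "No such process" and "Process with pid 75638 is not found" lines). A minimal sketch of that pattern, not the exact autotest_common.sh source:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1              # no pid supplied
      if kill -0 "$pid" 2>/dev/null; then    # process still alive?
          kill "$pid" && wait "$pid"         # terminate and reap it
      else
          echo "Process with pid $pid is not found"
      fi
  }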
00:25:57.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.603 20:33:12 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:57.603 20:33:12 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:57.603 20:33:12 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.603 20:33:12 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.604 20:33:12 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:57.604 20:33:12 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:57.604 20:33:12 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:57.604 20:33:12 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:57.604 20:33:12 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:57.604 20:33:12 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.604 20:33:12 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.604 20:33:12 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:57.604 20:33:12 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:57.604 20:33:12 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:57.604 20:33:12 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:57.604 20:33:12 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:57.604 20:33:12 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:57.604 20:33:12 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.604 20:33:12 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.604 20:33:12 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:57.604 20:33:12 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:57.604 20:33:12 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:57.604 20:33:12 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:57.604 20:33:12 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:57.604 20:33:12 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:57.604 20:33:12 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:57.604 20:33:12 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:57.604 20:33:12 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.604 20:33:12 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:57.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:57.604 20:33:12 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:57.604 20:33:12 -- ftl/common.sh@81 -- # local base_bdev= 00:25:57.604 20:33:12 -- ftl/common.sh@82 -- # local cache_bdev= 00:25:57.604 20:33:12 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:57.604 20:33:12 -- ftl/common.sh@89 -- # spdk_tgt_pid=78382 00:25:57.604 20:33:12 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:57.604 20:33:12 -- ftl/common.sh@91 -- # waitforlisten 78382 00:25:57.604 20:33:12 -- common/autotest_common.sh@819 -- # '[' -z 78382 ']' 00:25:57.604 20:33:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.604 20:33:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:57.604 20:33:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.604 20:33:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:57.604 20:33:12 -- common/autotest_common.sh@10 -- # set +x 00:25:57.604 20:33:12 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:57.604 [2024-10-16 20:33:12.398785] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
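The waitforlisten 78382 call above blocks until the freshly launched spdk_tgt answers RPCs on /var/tmp/spdk.sock, which is what the "Waiting for process to start up and listen..." message announces. The shape of that wait is a poll loop; a sketch of the idea, assuming any cheap RPC works as a liveness probe (rpc_get_methods is used here; the real helper may differ):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      while ! scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; do
          kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died
          sleep 0.1
      done
  }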
00:25:57.604 [2024-10-16 20:33:12.398927] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78382 ] 00:25:57.864 [2024-10-16 20:33:12.548008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.864 [2024-10-16 20:33:12.765617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:57.864 [2024-10-16 20:33:12.765849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.247 20:33:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:59.247 20:33:13 -- common/autotest_common.sh@852 -- # return 0 00:25:59.247 20:33:13 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:59.247 20:33:13 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:59.247 20:33:13 -- ftl/common.sh@99 -- # local params 00:25:59.247 20:33:13 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.247 20:33:13 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:59.247 20:33:13 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.247 20:33:13 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:25:59.247 20:33:13 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.247 20:33:13 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:59.247 20:33:13 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.247 20:33:13 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:25:59.247 20:33:13 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.247 20:33:13 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:59.247 20:33:13 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.247 20:33:13 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:59.247 20:33:13 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:25:59.247 20:33:13 -- ftl/common.sh@54 -- # local name=base 00:25:59.247 20:33:13 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:25:59.247 20:33:13 -- ftl/common.sh@56 -- # local size=20480 00:25:59.247 20:33:13 -- ftl/common.sh@59 -- # local base_bdev 00:25:59.247 20:33:13 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:25:59.509 20:33:14 -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:59.509 20:33:14 -- ftl/common.sh@62 -- # local base_size 00:25:59.509 20:33:14 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:59.509 20:33:14 -- common/autotest_common.sh@1357 -- # local bdev_name=basen1 00:25:59.509 20:33:14 -- common/autotest_common.sh@1358 -- # local bdev_info 00:25:59.509 20:33:14 -- common/autotest_common.sh@1359 -- # local bs 00:25:59.509 20:33:14 -- common/autotest_common.sh@1360 -- # local nb 00:25:59.509 20:33:14 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:59.509 20:33:14 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:25:59.509 { 00:25:59.509 "name": "basen1", 00:25:59.509 "aliases": [ 00:25:59.509 "3dbe8117-09c8-4320-9220-dc87ae063f40" 00:25:59.509 ], 00:25:59.509 "product_name": "NVMe disk", 00:25:59.509 "block_size": 4096, 00:25:59.509 "num_blocks": 1310720, 00:25:59.509 "uuid": "3dbe8117-09c8-4320-9220-dc87ae063f40", 00:25:59.509 "assigned_rate_limits": { 00:25:59.509 "rw_ios_per_sec": 0, 00:25:59.509 
"rw_mbytes_per_sec": 0, 00:25:59.509 "r_mbytes_per_sec": 0, 00:25:59.509 "w_mbytes_per_sec": 0 00:25:59.509 }, 00:25:59.509 "claimed": true, 00:25:59.509 "claim_type": "read_many_write_one", 00:25:59.509 "zoned": false, 00:25:59.509 "supported_io_types": { 00:25:59.509 "read": true, 00:25:59.509 "write": true, 00:25:59.509 "unmap": true, 00:25:59.509 "write_zeroes": true, 00:25:59.509 "flush": true, 00:25:59.509 "reset": true, 00:25:59.509 "compare": true, 00:25:59.509 "compare_and_write": false, 00:25:59.509 "abort": true, 00:25:59.509 "nvme_admin": true, 00:25:59.509 "nvme_io": true 00:25:59.509 }, 00:25:59.509 "driver_specific": { 00:25:59.509 "nvme": [ 00:25:59.509 { 00:25:59.509 "pci_address": "0000:00:07.0", 00:25:59.509 "trid": { 00:25:59.509 "trtype": "PCIe", 00:25:59.509 "traddr": "0000:00:07.0" 00:25:59.509 }, 00:25:59.509 "ctrlr_data": { 00:25:59.509 "cntlid": 0, 00:25:59.509 "vendor_id": "0x1b36", 00:25:59.509 "model_number": "QEMU NVMe Ctrl", 00:25:59.509 "serial_number": "12341", 00:25:59.509 "firmware_revision": "8.0.0", 00:25:59.509 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:59.509 "oacs": { 00:25:59.509 "security": 0, 00:25:59.509 "format": 1, 00:25:59.509 "firmware": 0, 00:25:59.509 "ns_manage": 1 00:25:59.509 }, 00:25:59.509 "multi_ctrlr": false, 00:25:59.509 "ana_reporting": false 00:25:59.509 }, 00:25:59.509 "vs": { 00:25:59.509 "nvme_version": "1.4" 00:25:59.509 }, 00:25:59.509 "ns_data": { 00:25:59.509 "id": 1, 00:25:59.509 "can_share": false 00:25:59.509 } 00:25:59.509 } 00:25:59.509 ], 00:25:59.509 "mp_policy": "active_passive" 00:25:59.509 } 00:25:59.509 } 00:25:59.509 ]' 00:25:59.509 20:33:14 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:25:59.509 20:33:14 -- common/autotest_common.sh@1362 -- # bs=4096 00:25:59.509 20:33:14 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:25:59.770 20:33:14 -- common/autotest_common.sh@1363 -- # nb=1310720 00:25:59.770 20:33:14 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:25:59.770 20:33:14 -- common/autotest_common.sh@1367 -- # echo 5120 00:25:59.770 20:33:14 -- ftl/common.sh@63 -- # base_size=5120 00:25:59.770 20:33:14 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:59.770 20:33:14 -- ftl/common.sh@67 -- # clear_lvols 00:25:59.770 20:33:14 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:59.770 20:33:14 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:59.770 20:33:14 -- ftl/common.sh@28 -- # stores=05fe023f-2c5c-4d65-8bbb-02731182f4c9 00:25:59.770 20:33:14 -- ftl/common.sh@29 -- # for lvs in $stores 00:25:59.770 20:33:14 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05fe023f-2c5c-4d65-8bbb-02731182f4c9 00:26:00.031 20:33:14 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:00.291 20:33:15 -- ftl/common.sh@68 -- # lvs=6d6bb4ea-1fcb-4dd1-8c14-1098e19b802e 00:26:00.291 20:33:15 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6d6bb4ea-1fcb-4dd1-8c14-1098e19b802e 00:26:00.553 20:33:15 -- ftl/common.sh@107 -- # base_bdev=300b6ffa-3209-42e3-9bc3-66b3c0d48674 00:26:00.553 20:33:15 -- ftl/common.sh@108 -- # [[ -z 300b6ffa-3209-42e3-9bc3-66b3c0d48674 ]] 00:26:00.553 20:33:15 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 300b6ffa-3209-42e3-9bc3-66b3c0d48674 5120 00:26:00.553 20:33:15 -- ftl/common.sh@35 -- # local name=cache 00:26:00.553 20:33:15 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:00.553 20:33:15 -- ftl/common.sh@37 -- # local base_bdev=300b6ffa-3209-42e3-9bc3-66b3c0d48674 00:26:00.553 20:33:15 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:00.553 20:33:15 -- ftl/common.sh@41 -- # get_bdev_size 300b6ffa-3209-42e3-9bc3-66b3c0d48674 00:26:00.553 20:33:15 -- common/autotest_common.sh@1357 -- # local bdev_name=300b6ffa-3209-42e3-9bc3-66b3c0d48674 00:26:00.553 20:33:15 -- common/autotest_common.sh@1358 -- # local bdev_info 00:26:00.553 20:33:15 -- common/autotest_common.sh@1359 -- # local bs 00:26:00.553 20:33:15 -- common/autotest_common.sh@1360 -- # local nb 00:26:00.553 20:33:15 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 300b6ffa-3209-42e3-9bc3-66b3c0d48674 00:26:00.814 20:33:15 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:26:00.814 { 00:26:00.814 "name": "300b6ffa-3209-42e3-9bc3-66b3c0d48674", 00:26:00.814 "aliases": [ 00:26:00.814 "lvs/basen1p0" 00:26:00.814 ], 00:26:00.814 "product_name": "Logical Volume", 00:26:00.814 "block_size": 4096, 00:26:00.814 "num_blocks": 5242880, 00:26:00.814 "uuid": "300b6ffa-3209-42e3-9bc3-66b3c0d48674", 00:26:00.814 "assigned_rate_limits": { 00:26:00.814 "rw_ios_per_sec": 0, 00:26:00.814 "rw_mbytes_per_sec": 0, 00:26:00.814 "r_mbytes_per_sec": 0, 00:26:00.814 "w_mbytes_per_sec": 0 00:26:00.814 }, 00:26:00.814 "claimed": false, 00:26:00.814 "zoned": false, 00:26:00.814 "supported_io_types": { 00:26:00.814 "read": true, 00:26:00.814 "write": true, 00:26:00.814 "unmap": true, 00:26:00.814 "write_zeroes": true, 00:26:00.814 "flush": false, 00:26:00.814 "reset": true, 00:26:00.814 "compare": false, 00:26:00.814 "compare_and_write": false, 00:26:00.814 "abort": false, 00:26:00.814 "nvme_admin": false, 00:26:00.814 "nvme_io": false 00:26:00.814 }, 00:26:00.814 "driver_specific": { 00:26:00.814 "lvol": { 00:26:00.814 "lvol_store_uuid": "6d6bb4ea-1fcb-4dd1-8c14-1098e19b802e", 00:26:00.814 "base_bdev": "basen1", 00:26:00.814 "thin_provision": true, 00:26:00.814 "snapshot": false, 00:26:00.814 "clone": false, 00:26:00.814 "esnap_clone": false 00:26:00.814 } 00:26:00.814 } 00:26:00.814 } 00:26:00.814 ]' 00:26:00.814 20:33:15 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:26:00.814 20:33:15 -- common/autotest_common.sh@1362 -- # bs=4096 00:26:00.814 20:33:15 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:26:00.814 20:33:15 -- common/autotest_common.sh@1363 -- # nb=5242880 00:26:00.814 20:33:15 -- common/autotest_common.sh@1366 -- # bdev_size=20480 00:26:00.814 20:33:15 -- common/autotest_common.sh@1367 -- # echo 20480 00:26:00.814 20:33:15 -- ftl/common.sh@41 -- # local base_size=1024 00:26:00.814 20:33:15 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:00.814 20:33:15 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:01.075 20:33:15 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:01.076 20:33:15 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:01.076 20:33:15 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:01.337 20:33:16 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:01.337 20:33:16 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:01.337 20:33:16 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 300b6ffa-3209-42e3-9bc3-66b3c0d48674 -c cachen1p0 --l2p_dram_limit 2 00:26:01.337 
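Everything from bdev_nvme_attach_controller down to bdev_ftl_create above is, in effect, a recipe for assembling an FTL bdev by hand. Condensed from the trace (the UUIDs are per-run values, so placeholders stand in below), with the base volume deliberately thin-provisioned at 20480 MiB on a 5120 MiB drive (basen1 reports 1310720 blocks x 4096 B = 5120 MiB):

  scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0   # base NVMe -> basen1
  scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs                           # lvstore on the base drive
  scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>              # thin-provisioned 20 GiB volume
  scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0  # cache NVMe -> cachen1
  scripts/rpc.py bdev_split_create cachen1 -s 5120 1                           # one 5120 MiB split -> cachen1p0
  scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2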
[2024-10-16 20:33:16.238524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.238591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:01.338 [2024-10-16 20:33:16.238610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:01.338 [2024-10-16 20:33:16.238622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.238687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.238698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:01.338 [2024-10-16 20:33:16.238709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:26:01.338 [2024-10-16 20:33:16.238717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.238739] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:01.338 [2024-10-16 20:33:16.239669] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:01.338 [2024-10-16 20:33:16.239716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.239724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:01.338 [2024-10-16 20:33:16.239739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.977 ms 00:26:01.338 [2024-10-16 20:33:16.239747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.239840] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID aa0955c5-6a72-4562-8a4a-eb1a7c78223e 00:26:01.338 [2024-10-16 20:33:16.241661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.241710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:01.338 [2024-10-16 20:33:16.241722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:01.338 [2024-10-16 20:33:16.241732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.250451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.250498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:01.338 [2024-10-16 20:33:16.250509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.672 ms 00:26:01.338 [2024-10-16 20:33:16.250519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.250565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.250576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:01.338 [2024-10-16 20:33:16.250585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:01.338 [2024-10-16 20:33:16.250598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.250654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.250670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:01.338 [2024-10-16 20:33:16.250678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:01.338 [2024-10-16 20:33:16.250689] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.250714] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:01.338 [2024-10-16 20:33:16.255329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.255373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:01.338 [2024-10-16 20:33:16.255386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.619 ms 00:26:01.338 [2024-10-16 20:33:16.255394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.255429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.255437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:01.338 [2024-10-16 20:33:16.255447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:01.338 [2024-10-16 20:33:16.255455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.255492] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:01.338 [2024-10-16 20:33:16.255615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:01.338 [2024-10-16 20:33:16.255632] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:01.338 [2024-10-16 20:33:16.255643] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:01.338 [2024-10-16 20:33:16.255656] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:01.338 [2024-10-16 20:33:16.255666] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:01.338 [2024-10-16 20:33:16.255679] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:01.338 [2024-10-16 20:33:16.255687] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:01.338 [2024-10-16 20:33:16.255697] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:01.338 [2024-10-16 20:33:16.255705] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:01.338 [2024-10-16 20:33:16.255716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.255732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:01.338 [2024-10-16 20:33:16.255743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.225 ms 00:26:01.338 [2024-10-16 20:33:16.255751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.255816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.338 [2024-10-16 20:33:16.255824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:01.338 [2024-10-16 20:33:16.255834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:26:01.338 [2024-10-16 20:33:16.255843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.338 [2024-10-16 20:33:16.255920] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:01.338 [2024-10-16 20:33:16.255930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:01.338 [2024-10-16 
20:33:16.255940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:01.338 [2024-10-16 20:33:16.255948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.338 [2024-10-16 20:33:16.255959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:01.338 [2024-10-16 20:33:16.255966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:01.338 [2024-10-16 20:33:16.255974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:01.338 [2024-10-16 20:33:16.255981] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:01.338 [2024-10-16 20:33:16.255991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:01.338 [2024-10-16 20:33:16.255998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.338 [2024-10-16 20:33:16.256006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:01.338 [2024-10-16 20:33:16.256014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:01.338 [2024-10-16 20:33:16.256024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.338 [2024-10-16 20:33:16.256031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:01.338 [2024-10-16 20:33:16.256041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:01.338 [2024-10-16 20:33:16.256072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.338 [2024-10-16 20:33:16.256083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:01.338 [2024-10-16 20:33:16.256091] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:01.338 [2024-10-16 20:33:16.256101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.338 [2024-10-16 20:33:16.256108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:01.338 [2024-10-16 20:33:16.256118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:01.338 [2024-10-16 20:33:16.256125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:01.338 [2024-10-16 20:33:16.256134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:01.338 [2024-10-16 20:33:16.256142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:01.338 [2024-10-16 20:33:16.256150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:01.338 [2024-10-16 20:33:16.256157] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:01.338 [2024-10-16 20:33:16.256167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:01.338 [2024-10-16 20:33:16.256173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:01.338 [2024-10-16 20:33:16.256182] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:01.338 [2024-10-16 20:33:16.256190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:01.338 [2024-10-16 20:33:16.256198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:01.338 [2024-10-16 20:33:16.256205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:01.338 [2024-10-16 20:33:16.256216] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:01.338 [2024-10-16 20:33:16.256223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:01.339 [2024-10-16 
20:33:16.256232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:01.339 [2024-10-16 20:33:16.256238] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:01.339 [2024-10-16 20:33:16.256247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.339 [2024-10-16 20:33:16.256253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:01.339 [2024-10-16 20:33:16.256263] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:01.339 [2024-10-16 20:33:16.256269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.339 [2024-10-16 20:33:16.256278] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:01.339 [2024-10-16 20:33:16.256287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:01.339 [2024-10-16 20:33:16.256296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:01.339 [2024-10-16 20:33:16.256304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.339 [2024-10-16 20:33:16.256317] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:01.339 [2024-10-16 20:33:16.256324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:01.339 [2024-10-16 20:33:16.256332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:01.339 [2024-10-16 20:33:16.256340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:01.339 [2024-10-16 20:33:16.256351] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:01.339 [2024-10-16 20:33:16.256357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:01.339 [2024-10-16 20:33:16.256368] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:01.339 [2024-10-16 20:33:16.256378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.339 [2024-10-16 20:33:16.256394] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:01.339 [2024-10-16 20:33:16.256401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:01.339 [2024-10-16 20:33:16.256411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:01.339 [2024-10-16 20:33:16.256418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:01.339 [2024-10-16 20:33:16.256452] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:01.339 [2024-10-16 20:33:16.256460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:01.339 [2024-10-16 20:33:16.256470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:01.339 [2024-10-16 20:33:16.256477] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:01.339 [2024-10-16 20:33:16.256487] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:01.339 [2024-10-16 20:33:16.256493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:01.339 [2024-10-16 20:33:16.256503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:01.339 [2024-10-16 20:33:16.256510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:01.339 [2024-10-16 20:33:16.256523] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:01.339 [2024-10-16 20:33:16.256531] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:01.339 [2024-10-16 20:33:16.256542] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.339 [2024-10-16 20:33:16.256550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:01.339 [2024-10-16 20:33:16.256560] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:01.339 [2024-10-16 20:33:16.256567] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:01.339 [2024-10-16 20:33:16.256577] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:01.339 [2024-10-16 20:33:16.256585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.339 [2024-10-16 20:33:16.256596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:01.339 [2024-10-16 20:33:16.256603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.713 ms 00:26:01.339 [2024-10-16 20:33:16.256614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.274583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.599 [2024-10-16 20:33:16.274796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:01.599 [2024-10-16 20:33:16.274818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.924 ms 00:26:01.599 [2024-10-16 20:33:16.274829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.274876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.599 [2024-10-16 20:33:16.274890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:01.599 [2024-10-16 20:33:16.274901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:01.599 [2024-10-16 20:33:16.274911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.310190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.599 [2024-10-16 20:33:16.310238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:01.599 [2024-10-16 20:33:16.310250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.228 ms 00:26:01.599 [2024-10-16 
20:33:16.310260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.310299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.599 [2024-10-16 20:33:16.310310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:01.599 [2024-10-16 20:33:16.310318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:01.599 [2024-10-16 20:33:16.310328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.310891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.599 [2024-10-16 20:33:16.310929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:01.599 [2024-10-16 20:33:16.310939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:26:01.599 [2024-10-16 20:33:16.310949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.311001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.599 [2024-10-16 20:33:16.311014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:01.599 [2024-10-16 20:33:16.311022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:01.599 [2024-10-16 20:33:16.311031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.329264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.599 [2024-10-16 20:33:16.329440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:01.599 [2024-10-16 20:33:16.329460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.196 ms 00:26:01.599 [2024-10-16 20:33:16.329470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.342795] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:01.599 [2024-10-16 20:33:16.344165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.599 [2024-10-16 20:33:16.344328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:01.599 [2024-10-16 20:33:16.344351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.595 ms 00:26:01.599 [2024-10-16 20:33:16.344359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.376380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.599 [2024-10-16 20:33:16.376446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:01.599 [2024-10-16 20:33:16.376464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.984 ms 00:26:01.599 [2024-10-16 20:33:16.376472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.599 [2024-10-16 20:33:16.376529] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
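
A first-boot FTL instance has no prior superblock state, so before the device can serve I/O the NV cache data region is zeroed; that is the "Scrub NV cache" step reported just below, which takes 4056.271 ms for the 4 GiB region. A quick sanity check on the implied rate, using only figures from this log:

    # scrub throughput implied by the trace below
    # 4096 MiB / 4.056271 s ~= 1010 MiB/s, i.e. roughly 1 GiB/s
    echo 'scale=1; 4096 / 4.056271' | bc    # -> 1009.7
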
00:26:01.599 [2024-10-16 20:33:16.376541] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:05.809 [2024-10-16 20:33:20.432821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.432903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:05.809 [2024-10-16 20:33:20.432926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4056.271 ms 00:26:05.809 [2024-10-16 20:33:20.432936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.433093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.433107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:05.809 [2024-10-16 20:33:20.433124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:26:05.809 [2024-10-16 20:33:20.433134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.459689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.459918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:05.809 [2024-10-16 20:33:20.459951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.492 ms 00:26:05.809 [2024-10-16 20:33:20.459961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.486018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.486081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:05.809 [2024-10-16 20:33:20.486102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.999 ms 00:26:05.809 [2024-10-16 20:33:20.486110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.486487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.486503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:05.809 [2024-10-16 20:33:20.486516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:26:05.809 [2024-10-16 20:33:20.486524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.562281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.562336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:05.809 [2024-10-16 20:33:20.562354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 75.704 ms 00:26:05.809 [2024-10-16 20:33:20.562363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.590559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.590618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:05.809 [2024-10-16 20:33:20.590635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 28.133 ms 00:26:05.809 [2024-10-16 20:33:20.590643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.592371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.592433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:05.809 [2024-10-16 20:33:20.592450] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.661 ms 00:26:05.809 [2024-10-16 20:33:20.592459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.619827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.619886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:05.809 [2024-10-16 20:33:20.619904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.312 ms 00:26:05.809 [2024-10-16 20:33:20.619911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.619973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.619983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:05.809 [2024-10-16 20:33:20.619995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:05.809 [2024-10-16 20:33:20.620003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.620140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.809 [2024-10-16 20:33:20.620153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:05.809 [2024-10-16 20:33:20.620163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:26:05.809 [2024-10-16 20:33:20.620172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.809 [2024-10-16 20:33:20.621402] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4382.354 ms, result 0 00:26:05.809 { 00:26:05.809 "name": "ftl", 00:26:05.809 "uuid": "aa0955c5-6a72-4562-8a4a-eb1a7c78223e" 00:26:05.809 } 00:26:05.809 20:33:20 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:06.070 [2024-10-16 20:33:20.832485] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.070 20:33:20 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:06.331 20:33:21 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:06.331 [2024-10-16 20:33:21.224942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:06.331 20:33:21 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:06.592 [2024-10-16 20:33:21.418602] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:06.592 20:33:21 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:06.852 Fill FTL, iteration 1 00:26:06.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 
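
At this point the FTL bdev (uuid aa0955c5-6a72-4562-8a4a-eb1a7c78223e) is up after a 4382.354 ms startup, and the next five RPCs export it over NVMe/TCP. Collected in one place, commands verbatim from the log (-a allows any host, -m caps the subsystem at one namespace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport --trtype TCP
    $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $RPC save_config
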
00:26:06.852 20:33:21 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:06.852 20:33:21 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:06.852 20:33:21 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:06.852 20:33:21 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:06.852 20:33:21 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:06.852 20:33:21 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:06.852 20:33:21 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:06.852 20:33:21 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:06.853 20:33:21 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:06.853 20:33:21 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:06.853 20:33:21 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:06.853 20:33:21 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:06.853 20:33:21 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:06.853 20:33:21 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:06.853 20:33:21 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:06.853 20:33:21 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:06.853 20:33:21 -- ftl/common.sh@163 -- # spdk_ini_pid=78517 00:26:06.853 20:33:21 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:06.853 20:33:21 -- ftl/common.sh@165 -- # waitforlisten 78517 /var/tmp/spdk.tgt.sock 00:26:06.853 20:33:21 -- common/autotest_common.sh@819 -- # '[' -z 78517 ']' 00:26:06.853 20:33:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:06.853 20:33:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:06.853 20:33:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:06.853 20:33:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:06.853 20:33:21 -- common/autotest_common.sh@10 -- # set +x 00:26:06.853 20:33:21 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:07.114 [2024-10-16 20:33:21.834226] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
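
The xtrace above fixes the workload shape: each iteration writes bs * count = 1 GiB (1024 blocks of 1048576 bytes) of urandom at queue depth 2, then reads the same extent back and records an MD5, for iterations=2 passes. A paraphrase of the loop being traced, with seek/skip advancing by one extent per pass (tcp_dd is the test helper whose internals are traced next; $testfile stands in for .../test/ftl/file):

    bs=1048576; count=1024; qd=2; iterations=2; seek=0; skip=0; sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
    done
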
00:26:07.114 [2024-10-16 20:33:21.834635] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78517 ] 00:26:07.114 [2024-10-16 20:33:21.986337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.375 [2024-10-16 20:33:22.209650] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:07.375 [2024-10-16 20:33:22.210088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.759 20:33:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:08.759 20:33:23 -- common/autotest_common.sh@852 -- # return 0 00:26:08.759 20:33:23 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:08.759 ftln1 00:26:08.759 20:33:23 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:08.759 20:33:23 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:09.019 20:33:23 -- ftl/common.sh@173 -- # echo ']}' 00:26:09.019 20:33:23 -- ftl/common.sh@176 -- # killprocess 78517 00:26:09.019 20:33:23 -- common/autotest_common.sh@926 -- # '[' -z 78517 ']' 00:26:09.019 20:33:23 -- common/autotest_common.sh@930 -- # kill -0 78517 00:26:09.019 20:33:23 -- common/autotest_common.sh@931 -- # uname 00:26:09.019 20:33:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:09.019 20:33:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78517 00:26:09.019 killing process with pid 78517 00:26:09.019 20:33:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:09.019 20:33:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:09.019 20:33:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78517' 00:26:09.019 20:33:23 -- common/autotest_common.sh@945 -- # kill 78517 00:26:09.019 20:33:23 -- common/autotest_common.sh@950 -- # wait 78517 00:26:10.400 20:33:25 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:10.400 20:33:25 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:10.400 [2024-10-16 20:33:25.313539] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
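
The tcp_dd helper traced above is the initiator side: it spawns a scratch spdk_tgt on core 1 with its own RPC socket, attaches to the target over TCP (the namespace surfaces as bdev ftln1), dumps the resulting bdev subsystem config (apparently to the ini.json that common.sh checks for), kills the scratch app, and then replays that config inside spdk_dd, which performs the actual transfer. Condensed to the two key commands, paths verbatim from the log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
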
00:26:10.400 [2024-10-16 20:33:25.313648] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78579 ] 00:26:10.660 [2024-10-16 20:33:25.461302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.919 [2024-10-16 20:33:25.598589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.305  [2024-10-16T20:33:28.173Z] Copying: 254/1024 [MB] (254 MBps) [2024-10-16T20:33:29.115Z] Copying: 507/1024 [MB] (253 MBps) [2024-10-16T20:33:30.079Z] Copying: 761/1024 [MB] (254 MBps) [2024-10-16T20:33:30.079Z] Copying: 1006/1024 [MB] (245 MBps) [2024-10-16T20:33:30.651Z] Copying: 1024/1024 [MB] (average 251 MBps) 00:26:15.722 00:26:15.722 20:33:30 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:15.722 20:33:30 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:15.722 Calculate MD5 checksum, iteration 1 00:26:15.722 20:33:30 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:15.722 20:33:30 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:15.722 20:33:30 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:15.722 20:33:30 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:15.722 20:33:30 -- ftl/common.sh@154 -- # return 0 00:26:15.722 20:33:30 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:15.983 [2024-10-16 20:33:30.660296] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
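
The read-back pass just launched inverts the direction flags: --ib names the FTL bdev as the input (where --if would name a file), --of writes the plain file that md5sum will hash, and --skip offsets the input in 1 MiB blocks, mirroring --seek on the write side. For reference, the helper invocation as traced:

    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
           --bs=1048576 --count=1024 --qd=2 --skip=0
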
00:26:15.983 [2024-10-16 20:33:30.660518] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78632 ] 00:26:15.983 [2024-10-16 20:33:30.799779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.244 [2024-10-16 20:33:30.938749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.630  [2024-10-16T20:33:32.820Z] Copying: 656/1024 [MB] (656 MBps) [2024-10-16T20:33:33.391Z] Copying: 1024/1024 [MB] (average 669 MBps) 00:26:18.462 00:26:18.462 20:33:33 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:18.462 20:33:33 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:21.006 20:33:35 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:21.006 20:33:35 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=67a3312c0ecf792167502b098a9d44fb 00:26:21.006 20:33:35 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:21.006 20:33:35 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:21.006 20:33:35 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:21.006 Fill FTL, iteration 2 00:26:21.006 20:33:35 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:21.006 20:33:35 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:21.006 20:33:35 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:21.006 20:33:35 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:21.006 20:33:35 -- ftl/common.sh@154 -- # return 0 00:26:21.006 20:33:35 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:21.006 [2024-10-16 20:33:35.453942] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
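
Iteration 1 is complete: 1 GiB written at an average 251 MBps and read back at 669 MBps, with sums[0]=67a3312c0ecf792167502b098a9d44fb recorded before the loop index advances. The wall-clock times implied by those averages are roughly consistent with the trace timestamps:

    # 1024 MiB / 251 MBps ~= 4.0 s for the fill
    # 1024 MiB / 669 MBps ~= 1.5 s for the read-back
    echo 'scale=1; 1024/251; 1024/669' | bc    # -> 4.0 and 1.5
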
00:26:21.006 [2024-10-16 20:33:35.454203] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78688 ] 00:26:21.006 [2024-10-16 20:33:35.599557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.006 [2024-10-16 20:33:35.737379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.389  [2024-10-16T20:33:38.261Z] Copying: 233/1024 [MB] (233 MBps) [2024-10-16T20:33:39.205Z] Copying: 480/1024 [MB] (247 MBps) [2024-10-16T20:33:40.147Z] Copying: 723/1024 [MB] (243 MBps) [2024-10-16T20:33:40.407Z] Copying: 969/1024 [MB] (246 MBps) [2024-10-16T20:33:40.978Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:26:26.049 00:26:26.049 Calculate MD5 checksum, iteration 2 00:26:26.049 20:33:40 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:26.049 20:33:40 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:26.049 20:33:40 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:26.049 20:33:40 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:26.049 20:33:40 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:26.049 20:33:40 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:26.049 20:33:40 -- ftl/common.sh@154 -- # return 0 00:26:26.049 20:33:40 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:26.049 [2024-10-16 20:33:40.948810] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
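
Iteration 2 targets the second 1 GiB extent, so the two passes never overlap, presumably so that each extent's checksum can be re-verified independently after the upgrade. With 1 MiB blocks the layout works out to:

    # iteration 1: blocks    0..1023  -> bytes          0..1073741823
    # iteration 2: blocks 1024..2047  -> bytes 1073741824..2147483647
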
00:26:26.050 [2024-10-16 20:33:40.948915] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78750 ] 00:26:26.310 [2024-10-16 20:33:41.095255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.310 [2024-10-16 20:33:41.232163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.222  [2024-10-16T20:33:43.412Z] Copying: 658/1024 [MB] (658 MBps) [2024-10-16T20:33:44.354Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:26:29.425 00:26:29.425 20:33:44 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:29.425 20:33:44 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:31.336 20:33:46 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:31.336 20:33:46 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2ba2ce3a065bedd6abff8ea587f8ca4a 00:26:31.336 20:33:46 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:31.336 20:33:46 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:31.336 20:33:46 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:31.598 [2024-10-16 20:33:46.315838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.598 [2024-10-16 20:33:46.315970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:31.598 [2024-10-16 20:33:46.316028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:31.598 [2024-10-16 20:33:46.316068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.598 [2024-10-16 20:33:46.316105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.598 [2024-10-16 20:33:46.316207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:31.598 [2024-10-16 20:33:46.316263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:31.598 [2024-10-16 20:33:46.316278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.598 [2024-10-16 20:33:46.316302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.598 [2024-10-16 20:33:46.316318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:31.598 [2024-10-16 20:33:46.316339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:31.598 [2024-10-16 20:33:46.316354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.598 [2024-10-16 20:33:46.316429] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.578 ms, result 0 00:26:31.598 true 00:26:31.598 20:33:46 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:31.598 { 00:26:31.598 "name": "ftl", 00:26:31.598 "properties": [ 00:26:31.598 { 00:26:31.598 "name": "superblock_version", 00:26:31.598 "value": 5, 00:26:31.598 "read-only": true 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "name": "base_device", 00:26:31.598 "bands": [ 00:26:31.598 { 00:26:31.598 "id": 0, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 1, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 2, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 
00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 3, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 4, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 5, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 6, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 7, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 8, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 9, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 10, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 11, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 12, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 13, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 14, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 15, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 16, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 17, 00:26:31.598 "state": "FREE", 00:26:31.598 "validity": 0.0 00:26:31.598 } 00:26:31.598 ], 00:26:31.598 "read-only": true 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "name": "cache_device", 00:26:31.598 "type": "bdev", 00:26:31.598 "chunks": [ 00:26:31.598 { 00:26:31.598 "id": 0, 00:26:31.598 "state": "CLOSED", 00:26:31.598 "utilization": 1.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 1, 00:26:31.598 "state": "CLOSED", 00:26:31.598 "utilization": 1.0 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 2, 00:26:31.598 "state": "OPEN", 00:26:31.598 "utilization": 0.001953125 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "id": 3, 00:26:31.598 "state": "OPEN", 00:26:31.598 "utilization": 0.0 00:26:31.598 } 00:26:31.598 ], 00:26:31.598 "read-only": true 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "name": "verbose_mode", 00:26:31.598 "value": true, 00:26:31.598 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:31.598 }, 00:26:31.598 { 00:26:31.598 "name": "prep_upgrade_on_shutdown", 00:26:31.598 "value": false, 00:26:31.598 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:31.598 } 00:26:31.598 ] 00:26:31.598 } 00:26:31.860 20:33:46 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:31.860 [2024-10-16 20:33:46.708141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.860 [2024-10-16 20:33:46.708174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:31.860 [2024-10-16 20:33:46.708183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:31.860 [2024-10-16 20:33:46.708189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.860 [2024-10-16 20:33:46.708206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:26:31.860 [2024-10-16 20:33:46.708212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:31.860 [2024-10-16 20:33:46.708219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:31.860 [2024-10-16 20:33:46.708224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.860 [2024-10-16 20:33:46.708239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.860 [2024-10-16 20:33:46.708244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:31.860 [2024-10-16 20:33:46.708250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:31.860 [2024-10-16 20:33:46.708255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.860 [2024-10-16 20:33:46.708296] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.144 ms, result 0 00:26:31.860 true 00:26:31.860 20:33:46 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:31.860 20:33:46 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:31.860 20:33:46 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:32.121 20:33:46 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:32.121 20:33:46 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:32.121 20:33:46 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:32.383 [2024-10-16 20:33:47.092454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.383 [2024-10-16 20:33:47.092486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:32.383 [2024-10-16 20:33:47.092493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:32.383 [2024-10-16 20:33:47.092498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.383 [2024-10-16 20:33:47.092514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.383 [2024-10-16 20:33:47.092520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:32.383 [2024-10-16 20:33:47.092526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:32.383 [2024-10-16 20:33:47.092531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.383 [2024-10-16 20:33:47.092546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.383 [2024-10-16 20:33:47.092551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:32.383 [2024-10-16 20:33:47.092556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:32.383 [2024-10-16 20:33:47.092561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.383 [2024-10-16 20:33:47.092599] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.134 ms, result 0 00:26:32.383 true 00:26:32.383 20:33:47 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:32.383 { 00:26:32.383 "name": "ftl", 00:26:32.383 "properties": [ 00:26:32.383 { 00:26:32.383 "name": "superblock_version", 00:26:32.383 "value": 5, 00:26:32.383 "read-only": true 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 
"name": "base_device", 00:26:32.383 "bands": [ 00:26:32.383 { 00:26:32.383 "id": 0, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 1, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 2, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 3, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 4, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 5, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 6, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 7, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 8, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 9, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 10, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 11, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 12, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 13, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 14, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 15, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 16, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 17, 00:26:32.383 "state": "FREE", 00:26:32.383 "validity": 0.0 00:26:32.383 } 00:26:32.383 ], 00:26:32.383 "read-only": true 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "name": "cache_device", 00:26:32.383 "type": "bdev", 00:26:32.383 "chunks": [ 00:26:32.383 { 00:26:32.383 "id": 0, 00:26:32.383 "state": "CLOSED", 00:26:32.383 "utilization": 1.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 1, 00:26:32.383 "state": "CLOSED", 00:26:32.383 "utilization": 1.0 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 2, 00:26:32.383 "state": "OPEN", 00:26:32.383 "utilization": 0.001953125 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "id": 3, 00:26:32.383 "state": "OPEN", 00:26:32.383 "utilization": 0.0 00:26:32.383 } 00:26:32.383 ], 00:26:32.383 "read-only": true 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "name": "verbose_mode", 00:26:32.383 "value": true, 00:26:32.383 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:32.383 }, 00:26:32.383 { 00:26:32.383 "name": "prep_upgrade_on_shutdown", 00:26:32.383 "value": true, 00:26:32.383 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:32.383 } 00:26:32.383 ] 00:26:32.383 } 00:26:32.643 20:33:47 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:32.643 20:33:47 -- ftl/common.sh@130 -- # [[ -n 78382 ]] 00:26:32.643 20:33:47 -- ftl/common.sh@131 -- # killprocess 78382 00:26:32.643 20:33:47 -- common/autotest_common.sh@926 -- # '[' -z 78382 ']' 00:26:32.643 20:33:47 -- 
common/autotest_common.sh@930 -- # kill -0 78382 00:26:32.643 20:33:47 -- common/autotest_common.sh@931 -- # uname 00:26:32.643 20:33:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:32.643 20:33:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78382 00:26:32.643 killing process with pid 78382 00:26:32.643 20:33:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:32.643 20:33:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:32.643 20:33:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78382' 00:26:32.643 20:33:47 -- common/autotest_common.sh@945 -- # kill 78382 00:26:32.643 20:33:47 -- common/autotest_common.sh@950 -- # wait 78382 00:26:33.214 [2024-10-16 20:33:47.875127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:33.214 [2024-10-16 20:33:47.887325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.214 [2024-10-16 20:33:47.887358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:33.214 [2024-10-16 20:33:47.887368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:33.214 [2024-10-16 20:33:47.887374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.214 [2024-10-16 20:33:47.887391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:33.214 [2024-10-16 20:33:47.889492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.214 [2024-10-16 20:33:47.889517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:33.214 [2024-10-16 20:33:47.889525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.091 ms 00:26:33.214 [2024-10-16 20:33:47.889531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.173980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.174034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:41.405 [2024-10-16 20:33:55.174059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7284.408 ms 00:26:41.405 [2024-10-16 20:33:55.174066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.175371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.175404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:41.405 [2024-10-16 20:33:55.175414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.288 ms 00:26:41.405 [2024-10-16 20:33:55.175420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.176280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.176298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:41.405 [2024-10-16 20:33:55.176306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.833 ms 00:26:41.405 [2024-10-16 20:33:55.176313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.184058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.184083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:41.405 [2024-10-16 20:33:55.184090] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.709 ms 00:26:41.405 [2024-10-16 20:33:55.184096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.189184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.189303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:41.405 [2024-10-16 20:33:55.189316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.064 ms 00:26:41.405 [2024-10-16 20:33:55.189322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.189385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.189392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:41.405 [2024-10-16 20:33:55.189399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:41.405 [2024-10-16 20:33:55.189408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.196825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.196920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:41.405 [2024-10-16 20:33:55.196931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.405 ms 00:26:41.405 [2024-10-16 20:33:55.196936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.204296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.204320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:41.405 [2024-10-16 20:33:55.204327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.338 ms 00:26:41.405 [2024-10-16 20:33:55.204333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.211350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.211441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:41.405 [2024-10-16 20:33:55.211451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.994 ms 00:26:41.405 [2024-10-16 20:33:55.211456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.218425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.218517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:41.405 [2024-10-16 20:33:55.218527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.925 ms 00:26:41.405 [2024-10-16 20:33:55.218533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.218553] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:41.405 [2024-10-16 20:33:55.218564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:41.405 [2024-10-16 20:33:55.218571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:41.405 [2024-10-16 20:33:55.218577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:41.405 [2024-10-16 20:33:55.218583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:41.405 [2024-10-16 20:33:55.218676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:41.405 [2024-10-16 20:33:55.218682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: aa0955c5-6a72-4562-8a4a-eb1a7c78223e 00:26:41.405 [2024-10-16 20:33:55.218688] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:41.405 [2024-10-16 20:33:55.218694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:41.405 [2024-10-16 20:33:55.218699] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:41.405 [2024-10-16 20:33:55.218705] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:41.405 [2024-10-16 20:33:55.218710] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:41.405 [2024-10-16 20:33:55.218716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:41.405 [2024-10-16 20:33:55.218723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:41.405 [2024-10-16 20:33:55.218728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:41.405 [2024-10-16 20:33:55.218732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:41.405 [2024-10-16 20:33:55.218739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.218745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:41.405 [2024-10-16 20:33:55.218755] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:26:41.405 [2024-10-16 20:33:55.218760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.228311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.228332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:41.405 [2024-10-16 20:33:55.228340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.530 ms 00:26:41.405 [2024-10-16 20:33:55.228346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.228503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.405 [2024-10-16 20:33:55.228509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:41.405 [2024-10-16 20:33:55.228516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:26:41.405 [2024-10-16 20:33:55.228521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.264015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.405 [2024-10-16 20:33:55.264052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:41.405 [2024-10-16 20:33:55.264060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.405 [2024-10-16 20:33:55.264070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.264094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.405 [2024-10-16 20:33:55.264100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:41.405 [2024-10-16 20:33:55.264106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.405 [2024-10-16 20:33:55.264112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.264157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.405 [2024-10-16 20:33:55.264164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:41.405 [2024-10-16 20:33:55.264170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.405 [2024-10-16 20:33:55.264176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.264190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.405 [2024-10-16 20:33:55.264197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:41.405 [2024-10-16 20:33:55.264202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.405 [2024-10-16 20:33:55.264226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.322631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.405 [2024-10-16 20:33:55.322665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:41.405 [2024-10-16 20:33:55.322674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.405 [2024-10-16 20:33:55.322680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.405 [2024-10-16 20:33:55.345147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.406 [2024-10-16 20:33:55.345175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:41.406 
[2024-10-16 20:33:55.345183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.406 [2024-10-16 20:33:55.345190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.406 [2024-10-16 20:33:55.345234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.406 [2024-10-16 20:33:55.345240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:41.406 [2024-10-16 20:33:55.345246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.406 [2024-10-16 20:33:55.345252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.406 [2024-10-16 20:33:55.345283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.406 [2024-10-16 20:33:55.345293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:41.406 [2024-10-16 20:33:55.345299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.406 [2024-10-16 20:33:55.345305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.406 [2024-10-16 20:33:55.345371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.406 [2024-10-16 20:33:55.345378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:41.406 [2024-10-16 20:33:55.345384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.406 [2024-10-16 20:33:55.345390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.406 [2024-10-16 20:33:55.345412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.406 [2024-10-16 20:33:55.345418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:41.406 [2024-10-16 20:33:55.345426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.406 [2024-10-16 20:33:55.345432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.406 [2024-10-16 20:33:55.345460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.406 [2024-10-16 20:33:55.345467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:41.406 [2024-10-16 20:33:55.345472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.406 [2024-10-16 20:33:55.345478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.406 [2024-10-16 20:33:55.345512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:41.406 [2024-10-16 20:33:55.345521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:41.406 [2024-10-16 20:33:55.345527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:41.406 [2024-10-16 20:33:55.345533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.406 [2024-10-16 20:33:55.345621] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7458.253 ms, result 0 00:26:49.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
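
Because prep_upgrade_on_shutdown was set, killing pid 78382 triggered the full persist sequence above (L2P, NV cache metadata, valid map, P2L, band info, trim metadata, superblock, clean state) for a total FTL shutdown of 7458.253 ms, and the stats dump records the write amplification accumulated over the run:

    # WAF from the stats above: total writes / user writes
    # 786752 / 524288 = 1.50061...  (logged as 1.5006)

The restart that follows boots a fresh target from the config snapshot saved by save_config earlier (the tgt.json that common.sh checks for), so the FTL bdev is re-created on top of the just-persisted superblock:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
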
00:26:49.553 20:34:03 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:49.553 20:34:03 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:49.553 20:34:03 -- ftl/common.sh@81 -- # local base_bdev= 00:26:49.553 20:34:03 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:49.553 20:34:03 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:49.553 20:34:03 -- ftl/common.sh@89 -- # spdk_tgt_pid=78941 00:26:49.553 20:34:03 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:49.553 20:34:03 -- ftl/common.sh@91 -- # waitforlisten 78941 00:26:49.553 20:34:03 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:49.553 20:34:03 -- common/autotest_common.sh@819 -- # '[' -z 78941 ']' 00:26:49.553 20:34:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.553 20:34:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:49.553 20:34:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.553 20:34:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:49.553 20:34:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.553 [2024-10-16 20:34:04.038603] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:49.553 [2024-10-16 20:34:04.039509] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78941 ] 00:26:49.553 [2024-10-16 20:34:04.190930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.553 [2024-10-16 20:34:04.342245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:49.553 [2024-10-16 20:34:04.342393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.127 [2024-10-16 20:34:04.867308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:50.127 [2024-10-16 20:34:04.867359] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:50.127 [2024-10-16 20:34:05.003640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.003673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:50.127 [2024-10-16 20:34:05.003683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:50.127 [2024-10-16 20:34:05.003689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.003726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.003735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:50.127 [2024-10-16 20:34:05.003741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:50.127 [2024-10-16 20:34:05.003747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.003760] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:50.127 [2024-10-16 20:34:05.004362] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:50.127 [2024-10-16 20:34:05.004379] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.004386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:50.127 [2024-10-16 20:34:05.004392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.622 ms 00:26:50.127 [2024-10-16 20:34:05.004398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.005347] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:50.127 [2024-10-16 20:34:05.015024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.015152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:50.127 [2024-10-16 20:34:05.015166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.678 ms 00:26:50.127 [2024-10-16 20:34:05.015172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.015216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.015223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:50.127 [2024-10-16 20:34:05.015229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:50.127 [2024-10-16 20:34:05.015235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.019513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.019537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:50.127 [2024-10-16 20:34:05.019544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.231 ms 00:26:50.127 [2024-10-16 20:34:05.019553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.019580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.019586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:50.127 [2024-10-16 20:34:05.019592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:50.127 [2024-10-16 20:34:05.019600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.019632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.019639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:50.127 [2024-10-16 20:34:05.019645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:50.127 [2024-10-16 20:34:05.019650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.019670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:50.127 [2024-10-16 20:34:05.022417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.022518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:50.127 [2024-10-16 20:34:05.022534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.753 ms 00:26:50.127 [2024-10-16 20:34:05.022540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.022562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.022568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Decorate bands 00:26:50.127 [2024-10-16 20:34:05.022574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:50.127 [2024-10-16 20:34:05.022579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.022595] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:50.127 [2024-10-16 20:34:05.022609] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:50.127 [2024-10-16 20:34:05.022634] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:50.127 [2024-10-16 20:34:05.022646] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:50.127 [2024-10-16 20:34:05.022702] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:50.127 [2024-10-16 20:34:05.022710] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:50.127 [2024-10-16 20:34:05.022717] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:50.127 [2024-10-16 20:34:05.022725] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:50.127 [2024-10-16 20:34:05.022732] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:50.127 [2024-10-16 20:34:05.022738] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:50.127 [2024-10-16 20:34:05.022746] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:50.127 [2024-10-16 20:34:05.022751] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:50.127 [2024-10-16 20:34:05.022758] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:50.127 [2024-10-16 20:34:05.022764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.022769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:50.127 [2024-10-16 20:34:05.022775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:26:50.127 [2024-10-16 20:34:05.022780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.022827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.127 [2024-10-16 20:34:05.022833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:50.127 [2024-10-16 20:34:05.022839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:50.127 [2024-10-16 20:34:05.022844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.127 [2024-10-16 20:34:05.022901] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:50.127 [2024-10-16 20:34:05.022909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:50.127 [2024-10-16 20:34:05.022915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.127 [2024-10-16 20:34:05.022920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.127 [2024-10-16 20:34:05.022926] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:50.127 [2024-10-16 20:34:05.022931] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:50.127 [2024-10-16 20:34:05.022936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:50.127 [2024-10-16 20:34:05.022941] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:50.127 [2024-10-16 20:34:05.022947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:50.127 [2024-10-16 20:34:05.022952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.127 [2024-10-16 20:34:05.022957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:50.127 [2024-10-16 20:34:05.022961] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:50.127 [2024-10-16 20:34:05.022966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.127 [2024-10-16 20:34:05.022975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:50.127 [2024-10-16 20:34:05.022980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:50.127 [2024-10-16 20:34:05.022985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.127 [2024-10-16 20:34:05.022990] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:50.127 [2024-10-16 20:34:05.022994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:50.127 [2024-10-16 20:34:05.022999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.127 [2024-10-16 20:34:05.023004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:50.127 [2024-10-16 20:34:05.023009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:50.127 [2024-10-16 20:34:05.023014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:50.127 [2024-10-16 20:34:05.023019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:50.127 [2024-10-16 20:34:05.023023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:50.127 [2024-10-16 20:34:05.023028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:50.127 [2024-10-16 20:34:05.023033] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:50.127 [2024-10-16 20:34:05.023038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:50.127 [2024-10-16 20:34:05.023051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:50.127 [2024-10-16 20:34:05.023056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:50.127 [2024-10-16 20:34:05.023062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:50.127 [2024-10-16 20:34:05.023066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:50.127 [2024-10-16 20:34:05.023071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:50.127 [2024-10-16 20:34:05.023076] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:50.127 [2024-10-16 20:34:05.023080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:50.127 [2024-10-16 20:34:05.023085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:50.127 [2024-10-16 20:34:05.023090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:50.128 [2024-10-16 20:34:05.023095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.128 [2024-10-16 20:34:05.023100] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:50.128 [2024-10-16 20:34:05.023104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:50.128 [2024-10-16 20:34:05.023109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.128 [2024-10-16 20:34:05.023114] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:50.128 [2024-10-16 20:34:05.023119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:50.128 [2024-10-16 20:34:05.023124] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.128 [2024-10-16 20:34:05.023129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.128 [2024-10-16 20:34:05.023135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:50.128 [2024-10-16 20:34:05.023142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:50.128 [2024-10-16 20:34:05.023147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:50.128 [2024-10-16 20:34:05.023153] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:50.128 [2024-10-16 20:34:05.023157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:50.128 [2024-10-16 20:34:05.023162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:50.128 [2024-10-16 20:34:05.023168] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:50.128 [2024-10-16 20:34:05.023175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.128 [2024-10-16 20:34:05.023184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:50.128 [2024-10-16 20:34:05.023189] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:50.128 [2024-10-16 20:34:05.023194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:50.128 [2024-10-16 20:34:05.023199] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:50.128 [2024-10-16 20:34:05.023205] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:50.128 [2024-10-16 20:34:05.023215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:50.128 [2024-10-16 20:34:05.023220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:50.128 [2024-10-16 20:34:05.023225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:50.128 [2024-10-16 20:34:05.023230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:50.128 [2024-10-16 20:34:05.023235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:50.128 [2024-10-16 20:34:05.023241] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:50.128 [2024-10-16 20:34:05.023246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:50.128 [2024-10-16 20:34:05.023252] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:50.128 [2024-10-16 20:34:05.023257] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:50.128 [2024-10-16 20:34:05.023262] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.128 [2024-10-16 20:34:05.023268] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:50.128 [2024-10-16 20:34:05.023273] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:50.128 [2024-10-16 20:34:05.023279] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:50.128 [2024-10-16 20:34:05.023284] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:50.128 [2024-10-16 20:34:05.023290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.128 [2024-10-16 20:34:05.023295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:50.128 [2024-10-16 20:34:05.023301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:26:50.128 [2024-10-16 20:34:05.023308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.128 [2024-10-16 20:34:05.034984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.128 [2024-10-16 20:34:05.035012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:50.128 [2024-10-16 20:34:05.035020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.644 ms 00:26:50.128 [2024-10-16 20:34:05.035025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.128 [2024-10-16 20:34:05.035065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.128 [2024-10-16 20:34:05.035071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:50.128 [2024-10-16 20:34:05.035077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:50.128 [2024-10-16 20:34:05.035083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.058767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.058793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:50.390 [2024-10-16 20:34:05.058802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.645 ms 00:26:50.390 [2024-10-16 20:34:05.058809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.058828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.058835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:50.390 [2024-10-16 20:34:05.058842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.002 ms 00:26:50.390 [2024-10-16 20:34:05.058848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.059171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.059186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:50.390 [2024-10-16 20:34:05.059193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:26:50.390 [2024-10-16 20:34:05.059199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.059228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.059234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:50.390 [2024-10-16 20:34:05.059241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:50.390 [2024-10-16 20:34:05.059246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.071052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.071075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:50.390 [2024-10-16 20:34:05.071082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.789 ms 00:26:50.390 [2024-10-16 20:34:05.071089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.080992] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:50.390 [2024-10-16 20:34:05.081019] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:50.390 [2024-10-16 20:34:05.081027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.081033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:50.390 [2024-10-16 20:34:05.081051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.866 ms 00:26:50.390 [2024-10-16 20:34:05.081062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.091546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.091570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:50.390 [2024-10-16 20:34:05.091578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.447 ms 00:26:50.390 [2024-10-16 20:34:05.091585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.100371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.100469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:50.390 [2024-10-16 20:34:05.100481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.755 ms 00:26:50.390 [2024-10-16 20:34:05.100487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.109331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.109356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:50.390 [2024-10-16 20:34:05.109363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.819 ms 00:26:50.390 [2024-10-16 20:34:05.109368] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.109648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.109665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:50.390 [2024-10-16 20:34:05.109672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:26:50.390 [2024-10-16 20:34:05.109677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.155306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.155410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:50.390 [2024-10-16 20:34:05.155422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.614 ms 00:26:50.390 [2024-10-16 20:34:05.155428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.163283] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:50.390 [2024-10-16 20:34:05.163794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.163817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:50.390 [2024-10-16 20:34:05.163825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.339 ms 00:26:50.390 [2024-10-16 20:34:05.163833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.163874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.163881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:50.390 [2024-10-16 20:34:05.163888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:50.390 [2024-10-16 20:34:05.163894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.163923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.163930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:50.390 [2024-10-16 20:34:05.163937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:50.390 [2024-10-16 20:34:05.163943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.390 [2024-10-16 20:34:05.164893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.390 [2024-10-16 20:34:05.164919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:50.390 [2024-10-16 20:34:05.164926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.934 ms 00:26:50.390 [2024-10-16 20:34:05.164932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.391 [2024-10-16 20:34:05.164951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.391 [2024-10-16 20:34:05.164957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:50.391 [2024-10-16 20:34:05.164962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:50.391 [2024-10-16 20:34:05.164967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.391 [2024-10-16 20:34:05.164995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:50.391 [2024-10-16 20:34:05.165003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:50.391 [2024-10-16 20:34:05.165011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:50.391 [2024-10-16 20:34:05.165017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:50.391 [2024-10-16 20:34:05.165022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.391 [2024-10-16 20:34:05.182410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.391 [2024-10-16 20:34:05.182504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:50.391 [2024-10-16 20:34:05.182516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.374 ms 00:26:50.391 [2024-10-16 20:34:05.182521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.391 [2024-10-16 20:34:05.182573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.391 [2024-10-16 20:34:05.182580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:50.391 [2024-10-16 20:34:05.182586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:50.391 [2024-10-16 20:34:05.182592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.391 [2024-10-16 20:34:05.183351] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 179.374 ms, result 0 00:26:50.391 [2024-10-16 20:34:05.198729] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.391 [2024-10-16 20:34:05.214727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:50.391 [2024-10-16 20:34:05.222827] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:50.652 20:34:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:50.652 20:34:05 -- common/autotest_common.sh@852 -- # return 0 00:26:50.652 20:34:05 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:50.652 20:34:05 -- ftl/common.sh@95 -- # return 0 00:26:50.652 20:34:05 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:50.913 [2024-10-16 20:34:05.639691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.913 [2024-10-16 20:34:05.639727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:50.913 [2024-10-16 20:34:05.639737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:50.913 [2024-10-16 20:34:05.639743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.913 [2024-10-16 20:34:05.639760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.913 [2024-10-16 20:34:05.639767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:50.913 [2024-10-16 20:34:05.639773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:50.913 [2024-10-16 20:34:05.639781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.914 [2024-10-16 20:34:05.639796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.914 [2024-10-16 20:34:05.639802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:50.914 [2024-10-16 20:34:05.639808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:50.914 [2024-10-16 20:34:05.639814] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.914 [2024-10-16 20:34:05.639858] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.161 ms, result 0 00:26:50.914 true 00:26:50.914 20:34:05 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:50.914 { 00:26:50.914 "name": "ftl", 00:26:50.914 "properties": [ 00:26:50.914 { 00:26:50.914 "name": "superblock_version", 00:26:50.914 "value": 5, 00:26:50.914 "read-only": true 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "name": "base_device", 00:26:50.914 "bands": [ 00:26:50.914 { 00:26:50.914 "id": 0, 00:26:50.914 "state": "CLOSED", 00:26:50.914 "validity": 1.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 1, 00:26:50.914 "state": "CLOSED", 00:26:50.914 "validity": 1.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 2, 00:26:50.914 "state": "CLOSED", 00:26:50.914 "validity": 0.007843137254901933 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 3, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 4, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 5, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 6, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 7, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 8, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 9, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 10, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 11, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 12, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 13, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 14, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 15, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 16, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 17, 00:26:50.914 "state": "FREE", 00:26:50.914 "validity": 0.0 00:26:50.914 } 00:26:50.914 ], 00:26:50.914 "read-only": true 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "name": "cache_device", 00:26:50.914 "type": "bdev", 00:26:50.914 "chunks": [ 00:26:50.914 { 00:26:50.914 "id": 0, 00:26:50.914 "state": "OPEN", 00:26:50.914 "utilization": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 1, 00:26:50.914 "state": "OPEN", 00:26:50.914 "utilization": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 2, 00:26:50.914 "state": "FREE", 00:26:50.914 "utilization": 0.0 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "id": 3, 00:26:50.914 "state": "FREE", 00:26:50.914 "utilization": 0.0 00:26:50.914 } 00:26:50.914 ], 00:26:50.914 "read-only": true 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "name": "verbose_mode", 00:26:50.914 "value": true, 00:26:50.914 "desc": "In verbose mode, user is able to get access 
to additional advanced FTL properties" 00:26:50.914 }, 00:26:50.914 { 00:26:50.914 "name": "prep_upgrade_on_shutdown", 00:26:50.914 "value": false, 00:26:50.914 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:50.914 } 00:26:50.914 ] 00:26:50.914 } 00:26:50.914 20:34:05 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:50.914 20:34:05 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:50.914 20:34:05 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:51.175 20:34:06 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:51.175 20:34:06 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:51.175 20:34:06 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:51.175 20:34:06 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:51.175 20:34:06 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:51.436 Validate MD5 checksum, iteration 1 00:26:51.436 20:34:06 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:51.436 20:34:06 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:51.436 20:34:06 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:51.436 20:34:06 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:51.436 20:34:06 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:51.436 20:34:06 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:51.436 20:34:06 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:51.436 20:34:06 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:51.436 20:34:06 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:51.436 20:34:06 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:51.436 20:34:06 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:51.436 20:34:06 -- ftl/common.sh@154 -- # return 0 00:26:51.436 20:34:06 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:51.436 [2024-10-16 20:34:06.238621] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:26:51.436 [2024-10-16 20:34:06.238854] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78986 ] 00:26:51.698 [2024-10-16 20:34:06.383247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.698 [2024-10-16 20:34:06.551824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.614  [2024-10-16T20:34:09.115Z] Copying: 602/1024 [MB] (602 MBps) [2024-10-16T20:34:10.499Z] Copying: 1024/1024 [MB] (average 537 MBps) 00:26:55.570 00:26:55.570 20:34:10 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:55.570 20:34:10 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:57.482 20:34:12 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:57.482 Validate MD5 checksum, iteration 2 00:26:57.482 20:34:12 -- ftl/upgrade_shutdown.sh@103 -- # sum=67a3312c0ecf792167502b098a9d44fb 00:26:57.482 20:34:12 -- ftl/upgrade_shutdown.sh@105 -- # [[ 67a3312c0ecf792167502b098a9d44fb != \6\7\a\3\3\1\2\c\0\e\c\f\7\9\2\1\6\7\5\0\2\b\0\9\8\a\9\d\4\4\f\b ]] 00:26:57.482 20:34:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:57.482 20:34:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:57.482 20:34:12 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:57.482 20:34:12 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:57.482 20:34:12 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:57.482 20:34:12 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:57.482 20:34:12 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:57.482 20:34:12 -- ftl/common.sh@154 -- # return 0 00:26:57.482 20:34:12 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:57.482 [2024-10-16 20:34:12.102450] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:26:57.482 [2024-10-16 20:34:12.102557] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79048 ] 00:26:57.482 [2024-10-16 20:34:12.252767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.744 [2024-10-16 20:34:12.480130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.130  [2024-10-16T20:34:14.628Z] Copying: 579/1024 [MB] (579 MBps) [2024-10-16T20:34:17.926Z] Copying: 1024/1024 [MB] (average 656 MBps) 00:27:02.997 00:27:02.997 20:34:17 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:02.997 20:34:17 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:04.910 20:34:19 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:04.910 20:34:19 -- ftl/upgrade_shutdown.sh@103 -- # sum=2ba2ce3a065bedd6abff8ea587f8ca4a 00:27:04.910 20:34:19 -- ftl/upgrade_shutdown.sh@105 -- # [[ 2ba2ce3a065bedd6abff8ea587f8ca4a != \2\b\a\2\c\e\3\a\0\6\5\b\e\d\d\6\a\b\f\f\8\e\a\5\8\7\f\8\c\a\4\a ]] 00:27:04.910 20:34:19 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:04.910 20:34:19 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:04.910 20:34:19 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:04.910 20:34:19 -- ftl/common.sh@137 -- # [[ -n 78941 ]] 00:27:04.910 20:34:19 -- ftl/common.sh@138 -- # kill -9 78941 00:27:04.910 20:34:19 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:04.910 20:34:19 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:04.910 20:34:19 -- ftl/common.sh@81 -- # local base_bdev= 00:27:04.910 20:34:19 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:04.910 20:34:19 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:04.910 20:34:19 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:04.910 20:34:19 -- ftl/common.sh@89 -- # spdk_tgt_pid=79136 00:27:04.910 20:34:19 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:04.910 20:34:19 -- ftl/common.sh@91 -- # waitforlisten 79136 00:27:04.910 20:34:19 -- common/autotest_common.sh@819 -- # '[' -z 79136 ']' 00:27:04.910 20:34:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.910 20:34:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:04.910 20:34:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.910 20:34:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:04.910 20:34:19 -- common/autotest_common.sh@10 -- # set +x 00:27:04.910 [2024-10-16 20:34:19.669197] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:04.910 [2024-10-16 20:34:19.669460] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79136 ] 00:27:04.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 818: 78941 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:04.910 [2024-10-16 20:34:19.822937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.171 [2024-10-16 20:34:19.975531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:05.171 [2024-10-16 20:34:19.975683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.744 [2024-10-16 20:34:20.505232] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:05.744 [2024-10-16 20:34:20.505279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:05.744 [2024-10-16 20:34:20.641506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.744 [2024-10-16 20:34:20.641537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:05.744 [2024-10-16 20:34:20.641547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:05.744 [2024-10-16 20:34:20.641553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.744 [2024-10-16 20:34:20.641589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.744 [2024-10-16 20:34:20.641598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:05.744 [2024-10-16 20:34:20.641605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:05.744 [2024-10-16 20:34:20.641610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.744 [2024-10-16 20:34:20.641624] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:05.744 [2024-10-16 20:34:20.642175] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:05.744 [2024-10-16 20:34:20.642193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.744 [2024-10-16 20:34:20.642199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:05.744 [2024-10-16 20:34:20.642205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.572 ms 00:27:05.744 [2024-10-16 20:34:20.642211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.744 [2024-10-16 20:34:20.642475] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:05.744 [2024-10-16 20:34:20.655034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.744 [2024-10-16 20:34:20.655064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:05.744 [2024-10-16 20:34:20.655073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.559 ms 00:27:05.744 [2024-10-16 20:34:20.655078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.744 [2024-10-16 20:34:20.661734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.744 [2024-10-16 20:34:20.661758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:05.744 [2024-10-16 20:34:20.661766] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:05.744 [2024-10-16 20:34:20.661771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.744 [2024-10-16 20:34:20.662008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.744 [2024-10-16 20:34:20.662021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:05.744 [2024-10-16 20:34:20.662028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:27:05.744 [2024-10-16 20:34:20.662033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.744 [2024-10-16 20:34:20.662067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.744 [2024-10-16 20:34:20.662073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:05.744 [2024-10-16 20:34:20.662079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:05.744 [2024-10-16 20:34:20.662086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.744 [2024-10-16 20:34:20.662103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.745 [2024-10-16 20:34:20.662109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:05.745 [2024-10-16 20:34:20.662115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:05.745 [2024-10-16 20:34:20.662120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.745 [2024-10-16 20:34:20.662139] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:05.745 [2024-10-16 20:34:20.664496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.745 [2024-10-16 20:34:20.664516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:05.745 [2024-10-16 20:34:20.664523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.364 ms 00:27:05.745 [2024-10-16 20:34:20.664528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.745 [2024-10-16 20:34:20.664548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.745 [2024-10-16 20:34:20.664555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:05.745 [2024-10-16 20:34:20.664562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:05.745 [2024-10-16 20:34:20.664568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.745 [2024-10-16 20:34:20.664583] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:05.745 [2024-10-16 20:34:20.664597] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:05.745 [2024-10-16 20:34:20.664621] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:05.745 [2024-10-16 20:34:20.664632] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:05.745 [2024-10-16 20:34:20.664686] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:05.745 [2024-10-16 20:34:20.664696] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:05.745 [2024-10-16 20:34:20.664704] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:05.745 [2024-10-16 20:34:20.664711] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:05.745 [2024-10-16 20:34:20.664718] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:05.745 [2024-10-16 20:34:20.664724] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:05.745 [2024-10-16 20:34:20.664730] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:05.745 [2024-10-16 20:34:20.664735] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:05.745 [2024-10-16 20:34:20.664740] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:05.745 [2024-10-16 20:34:20.664746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.745 [2024-10-16 20:34:20.664751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:05.745 [2024-10-16 20:34:20.664756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms 00:27:05.745 [2024-10-16 20:34:20.664763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.745 [2024-10-16 20:34:20.664810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.745 [2024-10-16 20:34:20.664815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:05.745 [2024-10-16 20:34:20.664820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:05.745 [2024-10-16 20:34:20.664825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.745 [2024-10-16 20:34:20.664880] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:05.745 [2024-10-16 20:34:20.664892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:05.745 [2024-10-16 20:34:20.664898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:05.745 [2024-10-16 20:34:20.664904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.745 [2024-10-16 20:34:20.664911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:05.745 [2024-10-16 20:34:20.664918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:05.745 [2024-10-16 20:34:20.664923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:05.745 [2024-10-16 20:34:20.664928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:05.745 [2024-10-16 20:34:20.664933] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:05.745 [2024-10-16 20:34:20.664938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.745 [2024-10-16 20:34:20.664943] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:05.745 [2024-10-16 20:34:20.664948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:05.745 [2024-10-16 20:34:20.664952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.745 [2024-10-16 20:34:20.664957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:05.745 [2024-10-16 20:34:20.664963] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:05.745 [2024-10-16 20:34:20.664968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.745 [2024-10-16 20:34:20.664973] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:05.745 [2024-10-16 20:34:20.664977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:05.745 [2024-10-16 20:34:20.664982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.745 [2024-10-16 20:34:20.664987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:05.745 [2024-10-16 20:34:20.664992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:05.745 [2024-10-16 20:34:20.664997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:05.745 [2024-10-16 20:34:20.665002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:05.745 [2024-10-16 20:34:20.665007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:05.745 [2024-10-16 20:34:20.665011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:05.745 [2024-10-16 20:34:20.665016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:05.745 [2024-10-16 20:34:20.665021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:05.745 [2024-10-16 20:34:20.665025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:05.745 [2024-10-16 20:34:20.665031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:05.745 [2024-10-16 20:34:20.665036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:05.745 [2024-10-16 20:34:20.665054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:05.745 [2024-10-16 20:34:20.665059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:05.745 [2024-10-16 20:34:20.665064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:05.745 [2024-10-16 20:34:20.665069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:05.745 [2024-10-16 20:34:20.665074] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:05.745 [2024-10-16 20:34:20.665079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:05.745 [2024-10-16 20:34:20.665084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.745 [2024-10-16 20:34:20.665090] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:05.745 [2024-10-16 20:34:20.665095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:05.745 [2024-10-16 20:34:20.665100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.745 [2024-10-16 20:34:20.665105] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:05.745 [2024-10-16 20:34:20.665111] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:05.745 [2024-10-16 20:34:20.665116] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:05.745 [2024-10-16 20:34:20.665122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.745 [2024-10-16 20:34:20.665127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:05.745 [2024-10-16 20:34:20.665133] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:05.745 [2024-10-16 20:34:20.665138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:05.745 [2024-10-16 20:34:20.665143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:05.745 [2024-10-16 20:34:20.665148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:05.745 [2024-10-16 20:34:20.665153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:05.745 [2024-10-16 20:34:20.665159] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:05.745 [2024-10-16 20:34:20.665165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.745 [2024-10-16 20:34:20.665172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:05.745 [2024-10-16 20:34:20.665177] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:05.745 [2024-10-16 20:34:20.665182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:05.745 [2024-10-16 20:34:20.665192] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:05.745 [2024-10-16 20:34:20.665198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:05.745 [2024-10-16 20:34:20.665204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:05.745 [2024-10-16 20:34:20.665210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:05.745 [2024-10-16 20:34:20.665215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:05.745 [2024-10-16 20:34:20.665220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:05.745 [2024-10-16 20:34:20.665225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:05.745 [2024-10-16 20:34:20.665231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:05.745 [2024-10-16 20:34:20.665236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:05.745 [2024-10-16 20:34:20.665242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:05.745 [2024-10-16 20:34:20.665247] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:05.745 [2024-10-16 20:34:20.665254] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.745 [2024-10-16 20:34:20.665259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:05.745 [2024-10-16 20:34:20.665265] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:05.745 [2024-10-16 20:34:20.665270] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:05.745 
[2024-10-16 20:34:20.665276] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:05.745 [2024-10-16 20:34:20.665282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.746 [2024-10-16 20:34:20.665288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:05.746 [2024-10-16 20:34:20.665294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.435 ms 00:27:05.746 [2024-10-16 20:34:20.665300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.675720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.675740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:06.038 [2024-10-16 20:34:20.675750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.388 ms 00:27:06.038 [2024-10-16 20:34:20.675755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.675783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.675789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:06.038 [2024-10-16 20:34:20.675795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:06.038 [2024-10-16 20:34:20.675800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.700153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.700182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:06.038 [2024-10-16 20:34:20.700191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.318 ms 00:27:06.038 [2024-10-16 20:34:20.700198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.700221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.700228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:06.038 [2024-10-16 20:34:20.700235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:06.038 [2024-10-16 20:34:20.700241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.700305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.700320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:06.038 [2024-10-16 20:34:20.700327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:06.038 [2024-10-16 20:34:20.700333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.700359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.700367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:06.038 [2024-10-16 20:34:20.700373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:06.038 [2024-10-16 20:34:20.700378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.712413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.712443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:06.038 [2024-10-16 
20:34:20.712450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.018 ms 00:27:06.038 [2024-10-16 20:34:20.712456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.712527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.712535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:06.038 [2024-10-16 20:34:20.712542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:06.038 [2024-10-16 20:34:20.712547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.725293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.725323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:06.038 [2024-10-16 20:34:20.725331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.733 ms 00:27:06.038 [2024-10-16 20:34:20.725337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.732326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.732353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:06.038 [2024-10-16 20:34:20.732360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.215 ms 00:27:06.038 [2024-10-16 20:34:20.732366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.778115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.038 [2024-10-16 20:34:20.778148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:06.038 [2024-10-16 20:34:20.778158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.712 ms 00:27:06.038 [2024-10-16 20:34:20.778164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.038 [2024-10-16 20:34:20.778229] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:06.038 [2024-10-16 20:34:20.778262] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:06.038 [2024-10-16 20:34:20.778292] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:06.039 [2024-10-16 20:34:20.778322] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:06.039 [2024-10-16 20:34:20.778327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.039 [2024-10-16 20:34:20.778333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:06.039 [2024-10-16 20:34:20.778341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.128 ms 00:27:06.039 [2024-10-16 20:34:20.778349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.039 [2024-10-16 20:34:20.778388] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:06.039 [2024-10-16 20:34:20.778396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.039 [2024-10-16 20:34:20.778401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:06.039 [2024-10-16 20:34:20.778407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:06.039 [2024-10-16 
20:34:20.778412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.039 [2024-10-16 20:34:20.789882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.039 [2024-10-16 20:34:20.789912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:06.039 [2024-10-16 20:34:20.789921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.454 ms 00:27:06.039 [2024-10-16 20:34:20.789928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.039 [2024-10-16 20:34:20.796487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.039 [2024-10-16 20:34:20.796515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:06.039 [2024-10-16 20:34:20.796523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:06.039 [2024-10-16 20:34:20.796528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.039 [2024-10-16 20:34:20.796567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.039 [2024-10-16 20:34:20.796574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:06.039 [2024-10-16 20:34:20.796580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:06.039 [2024-10-16 20:34:20.796585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.039 [2024-10-16 20:34:20.796697] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:06.612 [2024-10-16 20:34:21.517008] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:06.612 [2024-10-16 20:34:21.517164] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:07.556 [2024-10-16 20:34:22.300925] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:07.556 [2024-10-16 20:34:22.301063] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:07.556 [2024-10-16 20:34:22.301081] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:07.556 [2024-10-16 20:34:22.301093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.556 [2024-10-16 20:34:22.301103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:07.556 [2024-10-16 20:34:22.301117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1504.489 ms 00:27:07.556 [2024-10-16 20:34:22.301127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.556 [2024-10-16 20:34:22.301176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.556 [2024-10-16 20:34:22.301186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:07.556 [2024-10-16 20:34:22.301196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:07.556 [2024-10-16 20:34:22.301204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.556 [2024-10-16 20:34:22.313769] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:07.556 [2024-10-16 20:34:22.313912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.557 [2024-10-16 20:34:22.313924] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:07.557 [2024-10-16 20:34:22.313935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.690 ms 00:27:07.557 [2024-10-16 20:34:22.313943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.557 [2024-10-16 20:34:22.314667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.557 [2024-10-16 20:34:22.314695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:07.557 [2024-10-16 20:34:22.314706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.648 ms 00:27:07.557 [2024-10-16 20:34:22.314714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.557 [2024-10-16 20:34:22.316958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.557 [2024-10-16 20:34:22.316986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:07.557 [2024-10-16 20:34:22.316997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.227 ms 00:27:07.557 [2024-10-16 20:34:22.317005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.557 [2024-10-16 20:34:22.343295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.557 [2024-10-16 20:34:22.343341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:07.557 [2024-10-16 20:34:22.343354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.266 ms 00:27:07.557 [2024-10-16 20:34:22.343362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.557 [2024-10-16 20:34:22.343471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.557 [2024-10-16 20:34:22.343483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:07.557 [2024-10-16 20:34:22.343493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:07.557 [2024-10-16 20:34:22.343501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.557 [2024-10-16 20:34:22.344940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.557 [2024-10-16 20:34:22.344987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:07.557 [2024-10-16 20:34:22.344997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.420 ms 00:27:07.557 [2024-10-16 20:34:22.345004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.557 [2024-10-16 20:34:22.345037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.557 [2024-10-16 20:34:22.345061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:07.557 [2024-10-16 20:34:22.345069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:07.557 [2024-10-16 20:34:22.345077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.557 [2024-10-16 20:34:22.345124] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:07.557 [2024-10-16 20:34:22.345134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.557 [2024-10-16 20:34:22.345142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:07.557 [2024-10-16 20:34:22.345153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:07.557 [2024-10-16 20:34:22.345161] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:07.557 [2024-10-16 20:34:22.345217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.557 [2024-10-16 20:34:22.345226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:07.557 [2024-10-16 20:34:22.345233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:07.557 [2024-10-16 20:34:22.345240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.557 [2024-10-16 20:34:22.346282] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1704.278 ms, result 0 00:27:07.557 [2024-10-16 20:34:22.359666] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.557 [2024-10-16 20:34:22.375672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:07.557 [2024-10-16 20:34:22.383825] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:07.818 20:34:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:07.818 20:34:22 -- common/autotest_common.sh@852 -- # return 0 00:27:07.818 20:34:22 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:07.818 20:34:22 -- ftl/common.sh@95 -- # return 0 00:27:07.818 20:34:22 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:07.818 20:34:22 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:07.818 20:34:22 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:07.818 20:34:22 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:07.818 Validate MD5 checksum, iteration 1 00:27:07.818 20:34:22 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:07.818 20:34:22 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:07.818 20:34:22 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:07.818 20:34:22 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:07.818 20:34:22 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:07.818 20:34:22 -- ftl/common.sh@154 -- # return 0 00:27:07.818 20:34:22 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:07.819 [2024-10-16 20:34:22.562159] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
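
A note on the layout dump inside the startup trace above: every FTL region is reported twice, as a hex block count (blk_sz) in the superblock metadata dump and as a MiB figure in dump_region. The two agree if one FTL block is 4 KiB, which every region in this run is consistent with (0x20 blocks gives 0.12 MiB, 0x400 gives 4.00 MiB, 0x100000 gives 4096.00 MiB). A minimal sketch of that conversion, assuming the 4096-byte block size; blk_to_mib is an illustrative helper, not part of the test harness:

#!/usr/bin/env bash
# Convert an FTL region block count (hex, as printed in the SB metadata dump)
# to the MiB figure printed by dump_region. Assumes 4 KiB FTL blocks.
ftl_block_size=4096   # assumption: fixed FTL block size, consistent with this dump
blk_to_mib() {
    local blocks=$(( $1 ))                       # e.g. 0x20 -> 32
    local bytes=$(( blocks * ftl_block_size ))
    # Truncates the fraction; the C dump uses %.2f, close enough for eyeballing.
    printf '%d.%02d MiB\n' $(( bytes / 1048576 )) $(( bytes % 1048576 * 100 / 1048576 ))
}
blk_to_mib 0x20       # 0.12 MiB   (sb_mirror, trim_md and the other small metadata regions)
blk_to_mib 0x400      # 4.00 MiB   (p2l0..p2l3)
blk_to_mib 0x100000   # 4096.00 MiB (data_nvc)
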
00:27:07.819 [2024-10-16 20:34:22.562250] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79178 ] 00:27:07.819 [2024-10-16 20:34:22.708399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.080 [2024-10-16 20:34:22.969688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.994  [2024-10-16T20:34:25.183Z] Copying: 629/1024 [MB] (629 MBps) [2024-10-16T20:34:27.726Z] Copying: 1024/1024 [MB] (average 619 MBps) 00:27:12.797 00:27:12.797 20:34:27 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:12.797 20:34:27 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:14.706 20:34:29 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:14.706 Validate MD5 checksum, iteration 2 00:27:14.707 20:34:29 -- ftl/upgrade_shutdown.sh@103 -- # sum=67a3312c0ecf792167502b098a9d44fb 00:27:14.707 20:34:29 -- ftl/upgrade_shutdown.sh@105 -- # [[ 67a3312c0ecf792167502b098a9d44fb != \6\7\a\3\3\1\2\c\0\e\c\f\7\9\2\1\6\7\5\0\2\b\0\9\8\a\9\d\4\4\f\b ]] 00:27:14.707 20:34:29 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:14.707 20:34:29 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:14.707 20:34:29 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:14.707 20:34:29 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:14.707 20:34:29 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:14.707 20:34:29 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:14.707 20:34:29 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:14.707 20:34:29 -- ftl/common.sh@154 -- # return 0 00:27:14.707 20:34:29 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:14.707 [2024-10-16 20:34:29.325388] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
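
Both dd passes in this test (iteration 1 just validated above, iteration 2 launched here) follow the same pattern from test/ftl/upgrade_shutdown.sh: read 1024 MiB from the exported FTL bdev over NVMe/TCP into a scratch file at an increasing --skip offset, hash the file, and require the hash to match the checksum recorded before the shutdown/upgrade cycle. A hedged sketch of that loop; tcp_dd, the --ib/--of arguments, the skip arithmetic and the md5sum | cut pipeline are taken from the trace, while the per-iteration lookup in file.md5 is illustrative:

test_validate_checksum() {
    # Sketch of the loop traced above. tcp_dd is the harness helper that drives
    # spdk_dd against the ftln1 bdev exposed over the NVMe/TCP target.
    local skip=0 i sum expected
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$(( skip + 1024 ))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        expected=$(sed -n "$(( i + 1 ))p" "$testdir/file.md5")  # illustrative: one saved sum per line
        [[ $sum == "$expected" ]] || return 1
    done
}
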
00:27:14.707 [2024-10-16 20:34:29.325636] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79250 ] 00:27:14.707 [2024-10-16 20:34:29.470405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.965 [2024-10-16 20:34:29.636572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.340  [2024-10-16T20:34:31.527Z] Copying: 711/1024 [MB] (711 MBps) [2024-10-16T20:34:32.465Z] Copying: 1024/1024 [MB] (average 709 MBps) 00:27:17.536 00:27:17.536 20:34:32 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:17.536 20:34:32 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@103 -- # sum=2ba2ce3a065bedd6abff8ea587f8ca4a 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@105 -- # [[ 2ba2ce3a065bedd6abff8ea587f8ca4a != \2\b\a\2\c\e\3\a\0\6\5\b\e\d\d\6\a\b\f\f\8\e\a\5\8\7\f\8\c\a\4\a ]] 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:20.082 20:34:34 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:20.082 20:34:34 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:20.082 20:34:34 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:20.082 20:34:34 -- ftl/common.sh@130 -- # [[ -n 79136 ]] 00:27:20.082 20:34:34 -- ftl/common.sh@131 -- # killprocess 79136 00:27:20.082 20:34:34 -- common/autotest_common.sh@926 -- # '[' -z 79136 ']' 00:27:20.082 20:34:34 -- common/autotest_common.sh@930 -- # kill -0 79136 00:27:20.082 20:34:34 -- common/autotest_common.sh@931 -- # uname 00:27:20.082 20:34:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:20.082 20:34:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79136 00:27:20.082 20:34:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:20.082 20:34:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:20.082 killing process with pid 79136 00:27:20.082 20:34:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79136' 00:27:20.082 20:34:34 -- common/autotest_common.sh@945 -- # kill 79136 00:27:20.082 20:34:34 -- common/autotest_common.sh@950 -- # wait 79136 00:27:20.344 [2024-10-16 20:34:35.077737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:20.344 [2024-10-16 20:34:35.088366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.088400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:20.344 [2024-10-16 20:34:35.088410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:20.344 [2024-10-16 20:34:35.088416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 
[2024-10-16 20:34:35.088434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:20.344 [2024-10-16 20:34:35.090483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.090510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:20.344 [2024-10-16 20:34:35.090519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.037 ms 00:27:20.344 [2024-10-16 20:34:35.090524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.090722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.090734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:20.344 [2024-10-16 20:34:35.090741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:27:20.344 [2024-10-16 20:34:35.090746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.091914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.091938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:20.344 [2024-10-16 20:34:35.091945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.156 ms 00:27:20.344 [2024-10-16 20:34:35.091951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.092820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.092840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:20.344 [2024-10-16 20:34:35.092850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.849 ms 00:27:20.344 [2024-10-16 20:34:35.092856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.100518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.100544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:20.344 [2024-10-16 20:34:35.100552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.637 ms 00:27:20.344 [2024-10-16 20:34:35.100558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.104838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.104868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:20.344 [2024-10-16 20:34:35.104876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.253 ms 00:27:20.344 [2024-10-16 20:34:35.104883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.104946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.104953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:20.344 [2024-10-16 20:34:35.104960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:20.344 [2024-10-16 20:34:35.104965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.112431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.112456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:20.344 [2024-10-16 20:34:35.112462] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.452 ms 00:27:20.344 [2024-10-16 20:34:35.112468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.120691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.120716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:20.344 [2024-10-16 20:34:35.120723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.199 ms 00:27:20.344 [2024-10-16 20:34:35.120728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.128351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.128375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:20.344 [2024-10-16 20:34:35.128383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.598 ms 00:27:20.344 [2024-10-16 20:34:35.128388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.135419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.135443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:20.344 [2024-10-16 20:34:35.135450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.986 ms 00:27:20.344 [2024-10-16 20:34:35.135455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.135479] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:20.344 [2024-10-16 20:34:35.135489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:20.344 [2024-10-16 20:34:35.135497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:20.344 [2024-10-16 20:34:35.135503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:20.344 [2024-10-16 20:34:35.135509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135567] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:20.344 [2024-10-16 20:34:35.135604] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:20.344 [2024-10-16 20:34:35.135612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: aa0955c5-6a72-4562-8a4a-eb1a7c78223e 00:27:20.344 [2024-10-16 20:34:35.135618] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:20.344 [2024-10-16 20:34:35.135623] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:20.344 [2024-10-16 20:34:35.135629] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:20.344 [2024-10-16 20:34:35.135634] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:20.344 [2024-10-16 20:34:35.135640] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:20.344 [2024-10-16 20:34:35.135645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:20.344 [2024-10-16 20:34:35.135651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:20.344 [2024-10-16 20:34:35.135656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:20.344 [2024-10-16 20:34:35.135661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:20.344 [2024-10-16 20:34:35.135666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.135673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:20.344 [2024-10-16 20:34:35.135679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:27:20.344 [2024-10-16 20:34:35.135686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.145588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.145611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:20.344 [2024-10-16 20:34:35.145619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.887 ms 00:27:20.344 [2024-10-16 20:34:35.145625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.145770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.344 [2024-10-16 20:34:35.145777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:20.344 [2024-10-16 20:34:35.145787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 00:27:20.344 [2024-10-16 20:34:35.145792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.344 [2024-10-16 20:34:35.180759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.344 [2024-10-16 20:34:35.180786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:20.344 [2024-10-16 20:34:35.180793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.180799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.180823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.180829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:20.345 [2024-10-16 20:34:35.180839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.180844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.180891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.180898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:20.345 [2024-10-16 20:34:35.180904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.180910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.180925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.180931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:20.345 [2024-10-16 20:34:35.180937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.180945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.239597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.239628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:20.345 [2024-10-16 20:34:35.239636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.239644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.262138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.262165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:20.345 [2024-10-16 20:34:35.262173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.262182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.262223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.262230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:20.345 [2024-10-16 20:34:35.262236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.262241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.262273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.262279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:20.345 [2024-10-16 20:34:35.262285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.262291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.262358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.262366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:20.345 [2024-10-16 20:34:35.262372] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.262377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.262401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.262408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:20.345 [2024-10-16 20:34:35.262414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.262420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.262449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.262456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:20.345 [2024-10-16 20:34:35.262462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.262468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.262499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:20.345 [2024-10-16 20:34:35.262506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:20.345 [2024-10-16 20:34:35.262512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:20.345 [2024-10-16 20:34:35.262517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.345 [2024-10-16 20:34:35.262612] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 174.224 ms, result 0 00:27:21.289 20:34:35 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:21.289 20:34:35 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:21.289 20:34:35 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:21.289 20:34:35 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:21.289 20:34:35 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:21.289 20:34:35 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:21.289 20:34:35 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:21.289 Remove shared memory files 00:27:21.289 20:34:35 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:21.289 20:34:35 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:21.289 20:34:35 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:21.289 20:34:35 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78941 00:27:21.289 20:34:35 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:21.289 20:34:35 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:21.289 ************************************ 00:27:21.289 END TEST ftl_upgrade_shutdown 00:27:21.289 ************************************ 00:27:21.289 00:27:21.289 real 1m23.687s 00:27:21.289 user 1m54.657s 00:27:21.289 sys 0m19.331s 00:27:21.289 20:34:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.289 20:34:35 -- common/autotest_common.sh@10 -- # set +x 00:27:21.289 20:34:35 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:21.289 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:21.289 20:34:35 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:21.289 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:21.289 20:34:35 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:21.289 20:34:35 -- ftl/ftl.sh@14 -- # killprocess 70562 00:27:21.289 20:34:35 -- 
common/autotest_common.sh@926 -- # '[' -z 70562 ']' 00:27:21.289 20:34:35 -- common/autotest_common.sh@930 -- # kill -0 70562 00:27:21.289 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (70562) - No such process 00:27:21.289 Process with pid 70562 is not found 00:27:21.289 20:34:35 -- common/autotest_common.sh@953 -- # echo 'Process with pid 70562 is not found' 00:27:21.289 20:34:35 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:21.289 20:34:35 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79358 00:27:21.289 20:34:35 -- ftl/ftl.sh@20 -- # waitforlisten 79358 00:27:21.289 20:34:35 -- common/autotest_common.sh@819 -- # '[' -z 79358 ']' 00:27:21.289 20:34:35 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:21.289 20:34:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.289 20:34:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:21.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.289 20:34:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.289 20:34:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:21.289 20:34:35 -- common/autotest_common.sh@10 -- # set +x 00:27:21.289 [2024-10-16 20:34:36.015201] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:21.289 [2024-10-16 20:34:36.015315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79358 ] 00:27:21.289 [2024-10-16 20:34:36.162720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.550 [2024-10-16 20:34:36.310353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:21.550 [2024-10-16 20:34:36.310504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.122 20:34:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:22.122 20:34:36 -- common/autotest_common.sh@852 -- # return 0 00:27:22.122 20:34:36 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:22.384 nvme0n1 00:27:22.384 20:34:37 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:22.384 20:34:37 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:22.384 20:34:37 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:22.384 20:34:37 -- ftl/common.sh@28 -- # stores=6d6bb4ea-1fcb-4dd1-8c14-1098e19b802e 00:27:22.384 20:34:37 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:22.384 20:34:37 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d6bb4ea-1fcb-4dd1-8c14-1098e19b802e 00:27:22.643 20:34:37 -- ftl/ftl.sh@23 -- # killprocess 79358 00:27:22.643 20:34:37 -- common/autotest_common.sh@926 -- # '[' -z 79358 ']' 00:27:22.643 20:34:37 -- common/autotest_common.sh@930 -- # kill -0 79358 00:27:22.643 20:34:37 -- common/autotest_common.sh@931 -- # uname 00:27:22.643 20:34:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:22.643 20:34:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79358 00:27:22.643 20:34:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:22.643 20:34:37 -- common/autotest_common.sh@936 -- # '[' 
reactor_0 = sudo ']' 00:27:22.643 killing process with pid 79358 00:27:22.643 20:34:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79358' 00:27:22.643 20:34:37 -- common/autotest_common.sh@945 -- # kill 79358 00:27:22.643 20:34:37 -- common/autotest_common.sh@950 -- # wait 79358 00:27:24.075 20:34:38 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:24.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:24.075 Waiting for block devices as requested 00:27:24.075 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:27:24.075 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:27:24.336 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:24.336 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:29.627 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:29.627 Remove shared memory files 00:27:29.627 20:34:44 -- ftl/ftl.sh@28 -- # remove_shm 00:27:29.627 20:34:44 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:29.627 20:34:44 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:29.627 20:34:44 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:29.627 20:34:44 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:29.627 20:34:44 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:29.627 20:34:44 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:29.627 ************************************ 00:27:29.627 END TEST ftl 00:27:29.627 ************************************ 00:27:29.627 00:27:29.627 real 12m49.003s 00:27:29.627 user 15m22.627s 00:27:29.627 sys 1m11.239s 00:27:29.627 20:34:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.627 20:34:44 -- common/autotest_common.sh@10 -- # set +x 00:27:29.627 20:34:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:29.627 20:34:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:29.627 20:34:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:29.627 20:34:44 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:29.627 20:34:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:29.627 20:34:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:29.627 20:34:44 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:29.627 20:34:44 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:27:29.627 20:34:44 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:27:29.627 20:34:44 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:27:29.627 20:34:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:29.627 20:34:44 -- common/autotest_common.sh@10 -- # set +x 00:27:29.627 20:34:44 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:27:29.627 20:34:44 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:27:29.627 20:34:44 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:27:29.627 20:34:44 -- common/autotest_common.sh@10 -- # set +x 00:27:30.571 INFO: APP EXITING 00:27:30.571 INFO: killing all VMs 00:27:30.571 INFO: killing vhost app 00:27:30.571 INFO: EXIT DONE 00:27:31.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:31.514 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:31.514 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:31.514 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:31.515 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:32.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI 
dev 00:27:32.086 Cleaning 00:27:32.086 Removing: /var/run/dpdk/spdk0/config 00:27:32.086 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:32.086 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:32.086 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:32.086 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:32.086 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:32.086 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:32.086 Removing: /var/run/dpdk/spdk0 00:27:32.086 Removing: /var/run/dpdk/spdk_pid55983 00:27:32.086 Removing: /var/run/dpdk/spdk_pid56176 00:27:32.086 Removing: /var/run/dpdk/spdk_pid56459 00:27:32.086 Removing: /var/run/dpdk/spdk_pid56544 00:27:32.086 Removing: /var/run/dpdk/spdk_pid56631 00:27:32.086 Removing: /var/run/dpdk/spdk_pid56735 00:27:32.086 Removing: /var/run/dpdk/spdk_pid56812 00:27:32.086 Removing: /var/run/dpdk/spdk_pid56857 00:27:32.347 Removing: /var/run/dpdk/spdk_pid56888 00:27:32.347 Removing: /var/run/dpdk/spdk_pid56955 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57039 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57455 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57521 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57586 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57610 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57708 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57732 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57836 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57854 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57911 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57938 00:27:32.347 Removing: /var/run/dpdk/spdk_pid57991 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58011 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58164 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58206 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58286 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58358 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58389 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58456 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58482 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58523 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58549 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58595 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58616 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58657 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58683 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58730 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58756 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58797 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58817 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58858 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58881 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58922 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58949 00:27:32.347 Removing: /var/run/dpdk/spdk_pid58996 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59018 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59059 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59085 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59126 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59152 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59193 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59219 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59259 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59281 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59316 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59342 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59383 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59409 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59450 00:27:32.347 Removing: 
/var/run/dpdk/spdk_pid59471 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59512 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59541 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59585 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59614 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59658 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59684 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59725 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59751 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59793 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59863 00:27:32.347 Removing: /var/run/dpdk/spdk_pid59967 00:27:32.347 Removing: /var/run/dpdk/spdk_pid60136 00:27:32.347 Removing: /var/run/dpdk/spdk_pid60228 00:27:32.347 Removing: /var/run/dpdk/spdk_pid60264 00:27:32.347 Removing: /var/run/dpdk/spdk_pid60688 00:27:32.347 Removing: /var/run/dpdk/spdk_pid61123 00:27:32.347 Removing: /var/run/dpdk/spdk_pid61232 00:27:32.347 Removing: /var/run/dpdk/spdk_pid61280 00:27:32.347 Removing: /var/run/dpdk/spdk_pid61311 00:27:32.347 Removing: /var/run/dpdk/spdk_pid61386 00:27:32.347 Removing: /var/run/dpdk/spdk_pid62036 00:27:32.347 Removing: /var/run/dpdk/spdk_pid62072 00:27:32.347 Removing: /var/run/dpdk/spdk_pid62533 00:27:32.347 Removing: /var/run/dpdk/spdk_pid62698 00:27:32.347 Removing: /var/run/dpdk/spdk_pid62818 00:27:32.347 Removing: /var/run/dpdk/spdk_pid62898 00:27:32.347 Removing: /var/run/dpdk/spdk_pid62919 00:27:32.347 Removing: /var/run/dpdk/spdk_pid62950 00:27:32.347 Removing: /var/run/dpdk/spdk_pid64871 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65010 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65014 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65026 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65092 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65096 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65108 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65164 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65168 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65180 00:27:32.347 Removing: /var/run/dpdk/spdk_pid65236 00:27:32.348 Removing: /var/run/dpdk/spdk_pid65240 00:27:32.348 Removing: /var/run/dpdk/spdk_pid65252 00:27:32.348 Removing: /var/run/dpdk/spdk_pid66701 00:27:32.348 Removing: /var/run/dpdk/spdk_pid66796 00:27:32.348 Removing: /var/run/dpdk/spdk_pid66917 00:27:32.348 Removing: /var/run/dpdk/spdk_pid66993 00:27:32.348 Removing: /var/run/dpdk/spdk_pid67075 00:27:32.348 Removing: /var/run/dpdk/spdk_pid67146 00:27:32.348 Removing: /var/run/dpdk/spdk_pid67246 00:27:32.348 Removing: /var/run/dpdk/spdk_pid67320 00:27:32.348 Removing: /var/run/dpdk/spdk_pid67453 00:27:32.348 Removing: /var/run/dpdk/spdk_pid67828 00:27:32.348 Removing: /var/run/dpdk/spdk_pid67870 00:27:32.608 Removing: /var/run/dpdk/spdk_pid68308 00:27:32.608 Removing: /var/run/dpdk/spdk_pid68492 00:27:32.608 Removing: /var/run/dpdk/spdk_pid68587 00:27:32.608 Removing: /var/run/dpdk/spdk_pid68697 00:27:32.608 Removing: /var/run/dpdk/spdk_pid68750 00:27:32.608 Removing: /var/run/dpdk/spdk_pid68781 00:27:32.608 Removing: /var/run/dpdk/spdk_pid69089 00:27:32.608 Removing: /var/run/dpdk/spdk_pid69147 00:27:32.608 Removing: /var/run/dpdk/spdk_pid69227 00:27:32.608 Removing: /var/run/dpdk/spdk_pid69618 00:27:32.608 Removing: /var/run/dpdk/spdk_pid69766 00:27:32.608 Removing: /var/run/dpdk/spdk_pid70562 00:27:32.608 Removing: /var/run/dpdk/spdk_pid70685 00:27:32.608 Removing: /var/run/dpdk/spdk_pid70903 00:27:32.608 Removing: /var/run/dpdk/spdk_pid71010 00:27:32.608 Removing: /var/run/dpdk/spdk_pid71341 00:27:32.608 Removing: /var/run/dpdk/spdk_pid71587 
00:27:32.608 Removing: /var/run/dpdk/spdk_pid71966 00:27:32.609 Removing: /var/run/dpdk/spdk_pid72177 00:27:32.609 Removing: /var/run/dpdk/spdk_pid72325 00:27:32.609 Removing: /var/run/dpdk/spdk_pid72372 00:27:32.609 Removing: /var/run/dpdk/spdk_pid72549 00:27:32.609 Removing: /var/run/dpdk/spdk_pid72585 00:27:32.609 Removing: /var/run/dpdk/spdk_pid72640 00:27:32.609 Removing: /var/run/dpdk/spdk_pid72896 00:27:32.609 Removing: /var/run/dpdk/spdk_pid73151 00:27:32.609 Removing: /var/run/dpdk/spdk_pid73701 00:27:32.609 Removing: /var/run/dpdk/spdk_pid74374 00:27:32.609 Removing: /var/run/dpdk/spdk_pid74918 00:27:32.609 Removing: /var/run/dpdk/spdk_pid75638 00:27:32.609 Removing: /var/run/dpdk/spdk_pid75793 00:27:32.609 Removing: /var/run/dpdk/spdk_pid75874 00:27:32.609 Removing: /var/run/dpdk/spdk_pid76469 00:27:32.609 Removing: /var/run/dpdk/spdk_pid76526 00:27:32.609 Removing: /var/run/dpdk/spdk_pid77223 00:27:32.609 Removing: /var/run/dpdk/spdk_pid77620 00:27:32.609 Removing: /var/run/dpdk/spdk_pid78382 00:27:32.609 Removing: /var/run/dpdk/spdk_pid78517 00:27:32.609 Removing: /var/run/dpdk/spdk_pid78579 00:27:32.609 Removing: /var/run/dpdk/spdk_pid78632 00:27:32.609 Removing: /var/run/dpdk/spdk_pid78688 00:27:32.609 Removing: /var/run/dpdk/spdk_pid78750 00:27:32.609 Removing: /var/run/dpdk/spdk_pid78941 00:27:32.609 Removing: /var/run/dpdk/spdk_pid78986 00:27:32.609 Removing: /var/run/dpdk/spdk_pid79048 00:27:32.609 Removing: /var/run/dpdk/spdk_pid79136 00:27:32.609 Removing: /var/run/dpdk/spdk_pid79178 00:27:32.609 Removing: /var/run/dpdk/spdk_pid79250 00:27:32.609 Removing: /var/run/dpdk/spdk_pid79358 00:27:32.609 Clean 00:27:32.609 killing process with pid 48155 00:27:32.609 killing process with pid 48156 00:27:32.609 20:34:47 -- common/autotest_common.sh@1436 -- # return 0 00:27:32.609 20:34:47 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:27:32.609 20:34:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:32.609 20:34:47 -- common/autotest_common.sh@10 -- # set +x 00:27:32.871 20:34:47 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:27:32.871 20:34:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:32.871 20:34:47 -- common/autotest_common.sh@10 -- # set +x 00:27:32.871 20:34:47 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:32.871 20:34:47 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:32.871 20:34:47 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:32.871 20:34:47 -- spdk/autotest.sh@394 -- # hash lcov 00:27:32.871 20:34:47 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:32.871 20:34:47 -- spdk/autotest.sh@396 -- # hostname 00:27:32.871 20:34:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:32.871 geninfo: WARNING: invalid characters removed from testname! 
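
The capture pass above writes the test-time counters into cov_test.info; the lcov calls that follow merge it with the pre-test baseline and strip out-of-tree and helper-app sources. The same pipeline, condensed; paths and remove-patterns are as in the trace, while LCOV_OPTS is a stand-in for the repeated --rc lcov_*/genhtml_* switches:

# Condensed form of the coverage teardown traced here.
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'
out=/home/vagrant/spdk_repo/output
src=/home/vagrant/spdk_repo/spdk

lcov $LCOV_OPTS --no-external -c -d "$src" -t "$(hostname)" -o "$out/cov_test.info"       # capture test counters
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge baseline + test
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"              # filter external sources
done
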
00:27:59.456 20:35:10 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:59.456 20:35:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:02.004 20:35:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:04.552 20:35:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:06.546 20:35:21 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:09.092 20:35:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:11.640 20:35:26 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:11.640 20:35:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:11.640 20:35:26 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:11.640 20:35:26 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:11.640 20:35:26 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:11.640 20:35:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:11.640 20:35:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:11.640 20:35:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:11.640 20:35:26 -- paths/export.sh@5 -- $ export PATH
00:28:11.640 20:35:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:11.640 20:35:26 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:28:11.640 20:35:26 -- common/autobuild_common.sh@440 -- $ date +%s
00:28:11.640 20:35:26 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1729110926.XXXXXX
00:28:11.640 20:35:26 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1729110926.eAjDxw
00:28:11.640 20:35:26 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:28:11.640 20:35:26 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:28:11.640 20:35:26 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:28:11.640 20:35:26 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:28:11.640 20:35:26 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:28:11.640 20:35:26 -- common/autobuild_common.sh@456 -- $ get_config_params
00:28:11.640 20:35:26 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:28:11.640 20:35:26 -- common/autotest_common.sh@10 -- $ set +x
00:28:11.641 20:35:26 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:28:11.641 20:35:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:28:11.641 20:35:26 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:28:11.641 20:35:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:11.641 20:35:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:28:11.641 20:35:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:11.641 20:35:26 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:11.641 20:35:26 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:11.641 20:35:26 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:11.641 20:35:26 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:11.641 20:35:26 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:11.652 + [[ -n 4981 ]]
00:28:11.652 + sudo kill 4981
00:28:11.652 [Pipeline] }
00:28:11.666 [Pipeline] // timeout
00:28:11.671 [Pipeline] }
00:28:11.683 [Pipeline] // stage
00:28:11.687 [Pipeline] }
00:28:11.698 [Pipeline] // catchError
00:28:11.706 [Pipeline] stage
00:28:11.707 [Pipeline] { (Stop VM)
00:28:11.717 [Pipeline] sh
00:28:12.002 + vagrant halt
00:28:14.543 ==> default: Halting domain...
00:28:19.857 [Pipeline] sh
00:28:20.141 + vagrant destroy -f
00:28:22.680 ==> default: Removing domain...
00:28:23.265 [Pipeline] sh
00:28:23.549 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:28:23.604 [Pipeline] }
00:28:23.619 [Pipeline] // stage
00:28:23.624 [Pipeline] }
00:28:23.638 [Pipeline] // dir
00:28:23.643 [Pipeline] }
00:28:23.657 [Pipeline] // wrap
00:28:23.663 [Pipeline] }
00:28:23.676 [Pipeline] // catchError
00:28:23.686 [Pipeline] stage
00:28:23.688 [Pipeline] { (Epilogue)
00:28:23.701 [Pipeline] sh
00:28:23.988 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:28.189 [Pipeline] catchError
00:28:28.191 [Pipeline] {
00:28:28.206 [Pipeline] sh
00:28:28.494 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:28.494 Artifacts sizes are good
00:28:28.504 [Pipeline] }
00:28:28.520 [Pipeline] // catchError
00:28:28.532 [Pipeline] archiveArtifacts
00:28:28.540 Archiving artifacts
00:28:28.642 [Pipeline] cleanWs
00:28:28.655 [WS-CLEANUP] Deleting project workspace...
00:28:28.655 [WS-CLEANUP] Deferred wipeout is used...
00:28:28.662 [WS-CLEANUP] done
00:28:28.664 [Pipeline] }
00:28:28.681 [Pipeline] // stage
00:28:28.686 [Pipeline] }
00:28:28.700 [Pipeline] // node
00:28:28.706 [Pipeline] End of Pipeline
00:28:28.754 Finished: SUCCESS