00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2450
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3711
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.130 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.131 The recommended git tool is: git
00:00:00.131 using credential 00000000-0000-0000-0000-000000000002
00:00:00.134 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.155 Fetching changes from the remote Git repository
00:00:00.159 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.175 Using shallow fetch with depth 1
00:00:00.175 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.175 > git --version # timeout=10
00:00:00.199 > git --version # 'git version 2.39.2'
00:00:00.199 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.224 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.224 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.975 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.987 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.999 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.999 > git config core.sparsecheckout # timeout=10
00:00:05.010 > git read-tree -mu HEAD # timeout=10
00:00:05.024 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.048 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.048 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.143 [Pipeline] Start of Pipeline
00:00:05.153 [Pipeline] library
00:00:05.153 Loading library shm_lib@master
00:00:05.154 Library shm_lib@master is cached. Copying from home.
00:00:05.167 [Pipeline] node
00:00:05.179 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.181 [Pipeline] {
00:00:05.189 [Pipeline] catchError
00:00:05.190 [Pipeline] {
00:00:05.203 [Pipeline] wrap
00:00:05.211 [Pipeline] {
00:00:05.220 [Pipeline] stage
00:00:05.221 [Pipeline] { (Prologue)
00:00:05.240 [Pipeline] echo
00:00:05.241 Node: VM-host-SM9
00:00:05.247 [Pipeline] cleanWs
00:00:05.260 [WS-CLEANUP] Deleting project workspace...
00:00:05.260 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.265 [WS-CLEANUP] done
00:00:05.476 [Pipeline] setCustomBuildProperty
00:00:05.560 [Pipeline] httpRequest
00:00:06.003 [Pipeline] echo
00:00:06.005 Sorcerer 10.211.164.101 is alive
00:00:06.013 [Pipeline] retry
00:00:06.014 [Pipeline] {
00:00:06.028 [Pipeline] httpRequest
00:00:06.033 HttpMethod: GET
00:00:06.033 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.034 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.034 Response Code: HTTP/1.1 200 OK
00:00:06.035 Success: Status code 200 is in the accepted range: 200,404
00:00:06.036 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.470 [Pipeline] }
00:00:06.487 [Pipeline] // retry
00:00:06.494 [Pipeline] sh
00:00:06.776 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.789 [Pipeline] httpRequest
00:00:08.655 [Pipeline] echo
00:00:08.657 Sorcerer 10.211.164.101 is alive
00:00:08.667 [Pipeline] retry
00:00:08.669 [Pipeline] {
00:00:08.682 [Pipeline] httpRequest
00:00:08.687 HttpMethod: GET
00:00:08.687 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:08.688 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:08.707 Response Code: HTTP/1.1 200 OK
00:00:08.707 Success: Status code 200 is in the accepted range: 200,404
00:00:08.708 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:10.767 [Pipeline] }
00:01:10.786 [Pipeline] // retry
00:01:10.794 [Pipeline] sh
00:01:11.077 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:13.626 [Pipeline] sh
00:01:13.908 + git -C spdk log --oneline -n5
00:01:13.909 c13c99a5e test: Various fixes for Fedora40
00:01:13.909 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:13.909 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:13.909 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:13.909 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:13.928 [Pipeline] writeFile
00:01:13.943 [Pipeline] sh
00:01:14.225 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:14.235 [Pipeline] sh
00:01:14.510 + cat autorun-spdk.conf
00:01:14.510 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.510 SPDK_TEST_NVME=1
00:01:14.510 SPDK_TEST_FTL=1
00:01:14.510 SPDK_TEST_ISAL=1
00:01:14.510 SPDK_RUN_ASAN=1
00:01:14.510 SPDK_RUN_UBSAN=1
00:01:14.510 SPDK_TEST_XNVME=1
00:01:14.510 SPDK_TEST_NVME_FDP=1
00:01:14.510 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:14.518 RUN_NIGHTLY=1
00:01:14.519 [Pipeline] }
00:01:14.532 [Pipeline] // stage
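For orientation before the next stage: autorun-spdk.conf is a flat KEY=value shell fragment, and everything downstream simply sources it and branches on the flags (the ++ trace lines in the next stage show exactly that happening). A minimal sketch of the consumption pattern, with illustrative guards rather than the real scripts' logic:

    #!/usr/bin/env bash
    # Hypothetical consumer of autorun-spdk.conf (illustrative only).
    source ./autorun-spdk.conf
    # Each SPDK_TEST_*/SPDK_RUN_* flag gates an optional stage:
    [[ $SPDK_TEST_FTL == 1 ]] && echo "will provision an extra FTL backing image"
    [[ $SPDK_TEST_NVME_FDP == 1 ]] && echo "will provision an FDP-capable disk"
    [[ $RUN_NIGHTLY == 1 ]] && echo "running the extended nightly test set"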
00:01:14.545 [Pipeline] stage
00:01:14.547 [Pipeline] { (Run VM)
00:01:14.559 [Pipeline] sh
00:01:14.837 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:14.837 + echo 'Start stage prepare_nvme.sh'
00:01:14.837 Start stage prepare_nvme.sh
00:01:14.837 + [[ -n 1 ]]
00:01:14.837 + disk_prefix=ex1
00:01:14.837 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:14.837 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:14.837 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:14.837 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.837 ++ SPDK_TEST_NVME=1
00:01:14.837 ++ SPDK_TEST_FTL=1
00:01:14.837 ++ SPDK_TEST_ISAL=1
00:01:14.837 ++ SPDK_RUN_ASAN=1
00:01:14.837 ++ SPDK_RUN_UBSAN=1
00:01:14.837 ++ SPDK_TEST_XNVME=1
00:01:14.837 ++ SPDK_TEST_NVME_FDP=1
00:01:14.837 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:14.837 ++ RUN_NIGHTLY=1
00:01:14.837 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:14.837 + nvme_files=()
00:01:14.837 + declare -A nvme_files
00:01:14.837 + backend_dir=/var/lib/libvirt/images/backends
00:01:14.837 + nvme_files['nvme.img']=5G
00:01:14.837 + nvme_files['nvme-cmb.img']=5G
00:01:14.837 + nvme_files['nvme-multi0.img']=4G
00:01:14.837 + nvme_files['nvme-multi1.img']=4G
00:01:14.837 + nvme_files['nvme-multi2.img']=4G
00:01:14.837 + nvme_files['nvme-openstack.img']=8G
00:01:14.837 + nvme_files['nvme-zns.img']=5G
00:01:14.837 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:14.837 + (( SPDK_TEST_FTL == 1 ))
00:01:14.837 + nvme_files["nvme-ftl.img"]=6G
00:01:14.838 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:14.838 + nvme_files["nvme-fdp.img"]=1G
00:01:14.838 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:14.838 + for nvme in "${!nvme_files[@]}"
00:01:14.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:14.838 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:14.838 + for nvme in "${!nvme_files[@]}"
00:01:14.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
00:01:14.838 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:15.096 + for nvme in "${!nvme_files[@]}"
00:01:15.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:15.096 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.096 + for nvme in "${!nvme_files[@]}"
00:01:15.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:15.096 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:15.096 + for nvme in "${!nvme_files[@]}"
00:01:15.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:15.096 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.096 + for nvme in "${!nvme_files[@]}"
00:01:15.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:15.096 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.096 + for nvme in "${!nvme_files[@]}"
00:01:15.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:15.096 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.096 + for nvme in "${!nvme_files[@]}"
00:01:15.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
00:01:15.354 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:15.354 + for nvme in "${!nvme_files[@]}"
00:01:15.354 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:15.354 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.354 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:15.354 + echo 'End stage prepare_nvme.sh'
00:01:15.354 End stage prepare_nvme.sh
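The "Formatting ... fmt=raw size=... preallocation=falloc" lines above are the characteristic output of qemu-img, so each loop iteration of create_nvme_img.sh presumably boils down to something like the following (a hand-written reconstruction, not the actual script):

    # Allocate one raw backing file per entry of nvme_files, e.g. the 6G FTL image:
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex1-nvme-ftl.img 6G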
00:01:15.365 [Pipeline] sh
00:01:15.645 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:15.646 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:15.905 
00:01:15.905 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:15.905 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:15.905 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:15.905 HELP=0
00:01:15.905 DRY_RUN=0
00:01:15.905 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
00:01:15.905 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:15.905 NVME_AUTO_CREATE=0
00:01:15.905 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
00:01:15.905 NVME_CMB=,,,,
00:01:15.905 NVME_PMR=,,,,
00:01:15.905 NVME_ZNS=,,,,
00:01:15.905 NVME_MS=true,,,,
00:01:15.905 NVME_FDP=,,,on,
00:01:15.905 SPDK_VAGRANT_DISTRO=fedora39
00:01:15.905 SPDK_VAGRANT_VMCPU=10
00:01:15.905 SPDK_VAGRANT_VMRAM=12288
00:01:15.905 SPDK_VAGRANT_PROVIDER=libvirt
00:01:15.905 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:15.905 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:15.905 SPDK_OPENSTACK_NETWORK=0
00:01:15.905 VAGRANT_PACKAGE_BOX=0
00:01:15.905 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:15.905 FORCE_DISTRO=true
00:01:15.905 VAGRANT_BOX_VERSION=
00:01:15.905 EXTRA_VAGRANTFILES=
00:01:15.905 NIC_MODEL=e1000
00:01:15.905 
00:01:15.905 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:15.905 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
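The NAME=value block above is the effective environment vagrant_create_vm.sh hands to Vagrant; the SPDK Vagrantfile appears to read these through ENV to size the VM, pick the libvirt provider, and attach the backing disks. A trimmed, hand-run equivalent (illustrative subset of the variables printed above; assumes the same Vagrantfile):

    export SPDK_VAGRANT_DISTRO=fedora39
    export SPDK_VAGRANT_VMCPU=10 SPDK_VAGRANT_VMRAM=12288
    export SPDK_VAGRANT_PROVIDER=libvirt
    export NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img
    vagrant up --provider=libvirt   # run from the directory holding the Vagrantfile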
00:01:19.216 Bringing machine 'default' up with 'libvirt' provider...
00:01:19.516 ==> default: Creating image (snapshot of base box volume).
00:01:19.516 ==> default: Creating domain with the following settings...
00:01:19.516 ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1733690860_a693329aa6e6a2b4c2e3
00:01:19.516 ==> default:  -- Domain type:       kvm
00:01:19.516 ==> default:  -- Cpus:              10
00:01:19.516 ==> default:  -- Feature:           acpi
00:01:19.516 ==> default:  -- Feature:           apic
00:01:19.516 ==> default:  -- Feature:           pae
00:01:19.516 ==> default:  -- Memory:            12288M
00:01:19.516 ==> default:  -- Memory Backing:    hugepages:
00:01:19.516 ==> default:  -- Management MAC:
00:01:19.516 ==> default:  -- Loader:
00:01:19.516 ==> default:  -- Nvram:
00:01:19.516 ==> default:  -- Base box:          spdk/fedora39
00:01:19.516 ==> default:  -- Storage pool:      default
00:01:19.516 ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733690860_a693329aa6e6a2b4c2e3.img (20G)
00:01:19.516 ==> default:  -- Volume Cache:      default
00:01:19.516 ==> default:  -- Kernel:
00:01:19.516 ==> default:  -- Initrd:
00:01:19.516 ==> default:  -- Graphics Type:     vnc
00:01:19.516 ==> default:  -- Graphics Port:     -1
00:01:19.516 ==> default:  -- Graphics IP:       127.0.0.1
00:01:19.516 ==> default:  -- Graphics Password: Not defined
00:01:19.516 ==> default:  -- Video Type:        cirrus
00:01:19.516 ==> default:  -- Video VRAM:        9216
00:01:19.516 ==> default:  -- Sound Type:
00:01:19.516 ==> default:  -- Keymap:            en-us
00:01:19.516 ==> default:  -- TPM Path:
00:01:19.516 ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:19.516 ==> default:  -- Command line args:
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme,id=nvme-0,serial=12340,
00:01:19.516 ==> default:  -> value=-drive,
00:01:19.516 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme,id=nvme-1,serial=12341,
00:01:19.516 ==> default:  -> value=-drive,
00:01:19.516 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme,id=nvme-2,serial=12342,
00:01:19.516 ==> default:  -> value=-drive,
00:01:19.516 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.516 ==> default:  -> value=-drive,
00:01:19.516 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.516 ==> default:  -> value=-drive,
00:01:19.516 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:01:19.516 ==> default:  -> value=-drive,
00:01:19.516 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:19.516 ==> default:  -> value=-device,
00:01:19.516 ==> default:  -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
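Decoded, each "-> value=..." pair above is one QEMU argument: every backing file becomes a -drive, every controller a -device nvme, and every namespace a -device nvme-ns; the FDP disk additionally hangs off an nvme-subsys with fdp=on (fdp.runs, fdp.nrg and fdp.nruh set the reclaim-unit size, reclaim-group count, and reclaim-unit-handle count). A stripped-down, hand-written equivalent for just the FDP controller, with IDs, paths and flags copied from the log and everything else omitted:

    qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096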
00:01:19.777 ==> default: Creating shared folders metadata...
00:01:19.777 ==> default: Starting domain.
00:01:21.154 ==> default: Waiting for domain to get an IP address...
00:01:36.040 ==> default: Waiting for SSH to become available...
00:01:37.416 ==> default: Configuring and enabling network interfaces...
00:01:41.607     default: SSH address: 192.168.121.164:22
00:01:41.607     default: SSH username: vagrant
00:01:41.607     default: SSH auth method: private key
00:01:44.144 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:52.313 ==> default: Mounting SSHFS shared folder...
00:01:53.250 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:53.250 ==> default: Checking Mount..
00:01:54.625 ==> default: Folder Successfully Mounted!
00:01:54.625 ==> default: Running provisioner: file...
00:01:55.559     default: ~/.gitconfig => .gitconfig
00:01:55.818 
00:01:55.818 SUCCESS!
00:01:55.818 
00:01:55.818 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:55.818 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:55.818 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:55.818 
00:01:55.827 [Pipeline] }
00:01:55.842 [Pipeline] // stage
00:01:55.851 [Pipeline] dir
00:01:55.852 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:55.854 [Pipeline] {
00:01:55.866 [Pipeline] catchError
00:01:55.868 [Pipeline] {
00:01:55.881 [Pipeline] sh
00:01:56.161 + vagrant ssh-config --host vagrant
00:01:56.161 + sed -ne /^Host/,$p
00:01:56.161 + tee ssh_conf
00:01:58.693 Host vagrant
00:01:58.693   HostName 192.168.121.164
00:01:58.693   User vagrant
00:01:58.693   Port 22
00:01:58.693   UserKnownHostsFile /dev/null
00:01:58.693   StrictHostKeyChecking no
00:01:58.693   PasswordAuthentication no
00:01:58.693   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:58.693   IdentitiesOnly yes
00:01:58.693   LogLevel FATAL
00:01:58.693   ForwardAgent yes
00:01:58.693   ForwardX11 yes
00:01:58.693 
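The three "+" trace lines above are a single pipeline: vagrant ssh-config emits an ssh_config stanza, sed -ne '/^Host/,$p' keeps everything from the first Host line onward (dropping any chatter printed before it), and tee captures it into ssh_conf. Every later step can then reach the guest with stock OpenSSH tools, along these lines:

    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
    ssh -F ssh_conf vagrant 'echo connected'                  # reuse the captured stanza
    scp -F ssh_conf ./autorun-spdk.conf vagrant:spdk_repo/    # scp honors the same -F config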
00:01:58.708 [Pipeline] withEnv
00:01:58.710 [Pipeline] {
00:01:58.724 [Pipeline] sh
00:01:59.005 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:59.005 source /etc/os-release
00:01:59.005 [[ -e /image.version ]] && img=$(< /image.version)
00:01:59.005 # Minimal, systemd-like check.
00:01:59.005 if [[ -e /.dockerenv ]]; then
00:01:59.005   # Clear garbage from the node'\''s name:
00:01:59.005   # agt-er_autotest_547-896 -> autotest_547-896
00:01:59.005   # $HOSTNAME is the actual container id
00:01:59.005   agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:59.005   if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:59.005     # We can assume this is a mount from a host where container is running,
00:01:59.005     # so fetch its hostname to easily identify the target swarm worker.
00:01:59.005     container="$(< /etc/hostname) ($agent)"
00:01:59.005   else
00:01:59.005     # Fallback
00:01:59.005     container=$agent
00:01:59.005   fi
00:01:59.005 fi
00:01:59.005 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:59.005 
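The only non-obvious construct in that provisioning snippet is ${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}: the #*_ expansion strips the shortest leading match up to and including the first underscore, which is exactly what the "agt-er_autotest_547-896 -> autotest_547-896" comment describes. In isolation:

    name='agt-er_autotest_547-896'
    echo "${name#*_}"    # prints: autotest_547-896 (everything through the first '_' removed)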
00:01:59.277 [Pipeline] }
00:01:59.298 [Pipeline] // withEnv
00:01:59.308 [Pipeline] setCustomBuildProperty
00:01:59.327 [Pipeline] stage
00:01:59.330 [Pipeline] { (Tests)
00:01:59.351 [Pipeline] sh
00:01:59.635 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:59.911 [Pipeline] sh
00:02:00.199 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:00.477 [Pipeline] timeout
00:02:00.477 Timeout set to expire in 50 min
00:02:00.479 [Pipeline] {
00:02:00.495 [Pipeline] sh
00:02:00.778 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:01.347 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:02:01.360 [Pipeline] sh
00:02:01.640 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:01.914 [Pipeline] sh
00:02:02.197 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:02.474 [Pipeline] sh
00:02:02.757 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:03.018 ++ readlink -f spdk_repo
00:02:03.018 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:03.018 + [[ -n /home/vagrant/spdk_repo ]]
00:02:03.018 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:03.018 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:03.018 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:03.018 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:03.018 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:03.018 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:03.018 + cd /home/vagrant/spdk_repo
00:02:03.018 + source /etc/os-release
00:02:03.018 ++ NAME='Fedora Linux'
00:02:03.018 ++ VERSION='39 (Cloud Edition)'
00:02:03.018 ++ ID=fedora
00:02:03.018 ++ VERSION_ID=39
00:02:03.018 ++ VERSION_CODENAME=
00:02:03.018 ++ PLATFORM_ID=platform:f39
00:02:03.018 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:03.018 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:03.018 ++ LOGO=fedora-logo-icon
00:02:03.018 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:03.018 ++ HOME_URL=https://fedoraproject.org/
00:02:03.018 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:03.018 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:03.018 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:03.018 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:03.018 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:03.018 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:03.018 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:03.018 ++ SUPPORT_END=2024-11-12
00:02:03.018 ++ VARIANT='Cloud Edition'
00:02:03.018 ++ VARIANT_ID=cloud
00:02:03.018 + uname -a
00:02:03.018 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:03.018 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:03.018 Hugepages
00:02:03.018 node     hugesize     free /  total
00:02:03.018 node0   1048576kB        0 /      0
00:02:03.018 node0      2048kB        0 /      0
00:02:03.018 
00:02:03.018 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:03.018 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:02:03.018 NVMe     0000:00:06.0    1b36   0010   unknown nvme             nvme3      nvme3n1
00:02:03.278 NVMe     0000:00:07.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:02:03.278 NVMe     0000:00:08.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:02:03.278 NVMe     0000:00:09.0    1b36   0010   unknown nvme             nvme2      nvme2n1
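In the status table, vendor:device 1b36:0010 is QEMU's emulated NVMe controller, so all four controllers (nvme0-nvme3) are the ex1-* backends created earlier, and nvme1's three namespaces match the multi0/multi1/multi2 images. Hypothetical follow-up commands to cross-check one entry from inside the guest (not part of this log):

    lspci -nn -s 00:06.0                                 # should report a QEMU NVMe controller [1b36:0010]
    readlink /sys/bus/pci/devices/0000:00:06.0/driver    # resolves to .../drivers/nvme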
00:02:03.278 + rm -f /tmp/spdk-ld-path
00:02:03.278 + source autorun-spdk.conf
00:02:03.278 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:03.278 ++ SPDK_TEST_NVME=1
00:02:03.278 ++ SPDK_TEST_FTL=1
00:02:03.278 ++ SPDK_TEST_ISAL=1
00:02:03.278 ++ SPDK_RUN_ASAN=1
00:02:03.278 ++ SPDK_RUN_UBSAN=1
00:02:03.278 ++ SPDK_TEST_XNVME=1
00:02:03.278 ++ SPDK_TEST_NVME_FDP=1
00:02:03.278 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:03.278 ++ RUN_NIGHTLY=1
00:02:03.278 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:03.278 + [[ -n '' ]]
00:02:03.278 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:03.278 + for M in /var/spdk/build-*-manifest.txt
00:02:03.278 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:03.278 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:03.278 + for M in /var/spdk/build-*-manifest.txt
00:02:03.278 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:03.278 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:03.278 + for M in /var/spdk/build-*-manifest.txt
00:02:03.278 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:03.278 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:03.278 ++ uname
00:02:03.278 + [[ Linux == \L\i\n\u\x ]]
00:02:03.278 + sudo dmesg -T
00:02:03.278 + sudo dmesg --clear
00:02:03.278 + dmesg_pid=5256
00:02:03.278 + sudo dmesg -Tw
00:02:03.278 + [[ Fedora Linux == FreeBSD ]]
00:02:03.278 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:03.278 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:03.278 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:03.278 + [[ -x /usr/src/fio-static/fio ]]
00:02:03.278 + export FIO_BIN=/usr/src/fio-static/fio
00:02:03.278 + FIO_BIN=/usr/src/fio-static/fio
00:02:03.278 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:03.278 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:03.278 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:03.278 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:03.278 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:03.278 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:03.278 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:03.278 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:03.278 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:03.278 Test configuration:
00:02:03.278 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:03.278 SPDK_TEST_NVME=1
00:02:03.278 SPDK_TEST_FTL=1
00:02:03.278 SPDK_TEST_ISAL=1
00:02:03.278 SPDK_RUN_ASAN=1
00:02:03.278 SPDK_RUN_UBSAN=1
00:02:03.278 SPDK_TEST_XNVME=1
00:02:03.278 SPDK_TEST_NVME_FDP=1
00:02:03.278 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:03.538 RUN_NIGHTLY=1
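Everything from here on is spdk/autorun.sh executing inside the guest with the configuration echoed above. In rough outline it behaves like the sketch below; this is a simplification for orientation, not the verbatim script, which adds plumbing such as privilege handling and log collection:

    # Approximate shape of spdk/autorun.sh:
    conf=$1                   # /home/vagrant/spdk_repo/autorun-spdk.conf
    source "$conf"            # the values printed as "Test configuration:"
    ./autobuild.sh "$conf"    # configure + make (the xtrace that follows)
    ./autotest.sh "$conf"     # then the functional tests gated on SPDK_TEST_* flags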
00:02:03.538 20:48:24 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:02:03.538 20:48:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:03.538 20:48:24 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:03.538 20:48:24 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:03.538 20:48:24 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:03.538 20:48:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.538 20:48:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.538 20:48:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.538 20:48:24 -- paths/export.sh@5 -- $ export PATH
00:02:03.538 20:48:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.538 20:48:24 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:03.538 20:48:24 -- common/autobuild_common.sh@440 -- $ date +%s
00:02:03.538 20:48:24 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733690904.XXXXXX
00:02:03.538 20:48:24 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733690904.Nmt0pU
00:02:03.538 20:48:24 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:02:03.538 20:48:24 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:02:03.538 20:48:24 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:03.538 20:48:24 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:03.538 20:48:24 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:03.538 20:48:24 -- common/autobuild_common.sh@456 -- $ get_config_params
00:02:03.538 20:48:24 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:02:03.538 20:48:24 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.538 20:48:24 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:03.538 20:48:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:03.538 20:48:24 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:03.538 20:48:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:03.538 20:48:24 -- spdk/autobuild.sh@16 -- $ date -u
00:02:03.538 Sun Dec 8 08:48:24 PM UTC 2024
00:02:03.538 20:48:24 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:03.538 LTS-67-gc13c99a5e
00:02:03.538 20:48:24 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:03.538 20:48:24 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:03.538 20:48:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:03.538 20:48:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:03.538 20:48:24 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.538 ************************************
00:02:03.538 START TEST asan
00:02:03.538 ************************************
00:02:03.538 using asan
00:02:03.538 20:48:24 -- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:02:03.538 
00:02:03.538 real	0m0.000s
00:02:03.538 user	0m0.000s
00:02:03.538 sys	0m0.000s
00:02:03.538 20:48:24 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:03.538 ************************************
00:02:03.538 END TEST asan
00:02:03.538 20:48:24 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.538 ************************************
00:02:03.538 20:48:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:03.538 20:48:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:03.538 20:48:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:03.538 20:48:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:03.538 20:48:24 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.538 ************************************
00:02:03.538 START TEST ubsan
00:02:03.538 ************************************
00:02:03.538 using ubsan
00:02:03.538 20:48:24 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:02:03.538 
00:02:03.538 real	0m0.000s
00:02:03.538 user	0m0.000s
00:02:03.538 sys	0m0.000s
00:02:03.538 20:48:24 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:03.538 20:48:24 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.538 ************************************
00:02:03.538 END TEST ubsan
00:02:03.538 ************************************
00:02:03.538 20:48:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:03.538 20:48:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:03.538 20:48:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:03.538 20:48:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:03.538 20:48:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:03.538 20:48:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:03.538 20:48:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:03.538 20:48:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:03.538 20:48:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:03.798 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:03.798 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:04.366 Using 'verbs' RDMA provider
00:02:19.817 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:32.138 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:32.138 Creating mk/config.mk...done.
00:02:32.138 Creating mk/cc.flags.mk...done.
00:02:32.138 Type 'make' to build.
00:02:32.138 20:48:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:32.138 20:48:52 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:32.138 20:48:52 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:32.138 20:48:52 -- common/autotest_common.sh@10 -- $ set +x
00:02:32.138 ************************************
00:02:32.138 START TEST make
00:02:32.138 ************************************
00:02:32.138 20:48:52 -- common/autotest_common.sh@1114 -- $ make -j10
00:02:32.138 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:32.138 	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:32.138 	meson setup builddir \
00:02:32.138 	-Dwith-libaio=enabled \
00:02:32.138 	-Dwith-liburing=enabled \
00:02:32.138 	-Dwith-libvfn=disabled \
00:02:32.138 	-Dwith-spdk=false && \
00:02:32.138 	meson compile -C builddir && \
00:02:32.138 	cd -)
00:02:32.138 make[1]: Nothing to be done for 'all'.
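The parenthesized subshell above is how the SPDK tree drives the bundled xnvme build: a meson setup with the libaio and io_uring backends enabled and the libvfn/SPDK backends off, matching the "User defined options" summary printed below. To inspect or flip one of those feature options later, standard meson commands would do it (hypothetical follow-ups, not part of this run):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson configure builddir | grep -E 'with-(libaio|liburing|libvfn|spdk)'   # show current values
    meson configure builddir -Dwith-libvfn=enabled                           # reconfigure one option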
00:02:34.670 The Meson build system
00:02:34.670 Version: 1.5.0
00:02:34.670 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:34.670 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:34.670 Build type: native build
00:02:34.670 Project name: xnvme
00:02:34.670 Project version: 0.7.3
00:02:34.670 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:34.670 C linker for the host machine: cc ld.bfd 2.40-14
00:02:34.670 Host machine cpu family: x86_64
00:02:34.670 Host machine cpu: x86_64
00:02:34.670 Message: host_machine.system: linux
00:02:34.670 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:34.670 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:34.670 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:34.670 Run-time dependency threads found: YES
00:02:34.670 Has header "setupapi.h" : NO
00:02:34.670 Has header "linux/blkzoned.h" : YES
00:02:34.670 Has header "linux/blkzoned.h" : YES (cached)
00:02:34.670 Has header "libaio.h" : YES
00:02:34.670 Library aio found: YES
00:02:34.670 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:34.670 Run-time dependency liburing found: YES 2.2
00:02:34.670 Dependency libvfn skipped: feature with-libvfn disabled
00:02:34.670 Run-time dependency appleframeworks found: NO (tried framework)
00:02:34.670 Run-time dependency appleframeworks found: NO (tried framework)
00:02:34.670 Configuring xnvme_config.h using configuration
00:02:34.670 Configuring xnvme.spec using configuration
00:02:34.670 Run-time dependency bash-completion found: YES 2.11
00:02:34.670 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:34.670 Program cp found: YES (/usr/bin/cp)
00:02:34.670 Has header "winsock2.h" : NO
00:02:34.670 Has header "dbghelp.h" : NO
00:02:34.670 Library rpcrt4 found: NO
00:02:34.670 Library rt found: YES
00:02:34.670 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:34.670 Found CMake: /usr/bin/cmake (3.27.7)
00:02:34.670 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:34.670 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:34.670 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:34.670 Build targets in project: 32
00:02:34.670 
00:02:34.670 xnvme 0.7.3
00:02:34.670 
00:02:34.670 User defined options
00:02:34.670   with-libaio  : enabled
00:02:34.670   with-liburing: enabled
00:02:34.670   with-libvfn  : disabled
00:02:34.670   with-spdk    : false
00:02:34.670 
00:02:35.237 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:35.237 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:35.237 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:35.237 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:35.237 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:35.237 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:35.237 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:35.237 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:35.237 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:35.237 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:35.237 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:35.237 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:35.237 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:35.495 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:35.495 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:35.495 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:35.495 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:35.495 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:35.495 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:35.495 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:35.495 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:35.495 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:35.495 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:35.495 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:35.495 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:35.495 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:35.495 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:35.495 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:35.495 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:35.495 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:35.495 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:35.495 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:35.495 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:35.495 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:35.495 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:35.495 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:35.753 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:35.753 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:35.753 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:35.753 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:35.753 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:35.753 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:35.753 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:35.753 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:35.753 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:35.753 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:35.753 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:35.753 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:35.753 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:35.753 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:35.753 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:35.753 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:35.753 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:35.753 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:35.753 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:35.753 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:35.753 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:35.753 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:35.753 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:35.753 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:35.753 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:35.753 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:36.011 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:36.011 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:36.011 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:36.011 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:36.011 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:36.011 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:36.011 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:36.011 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:36.011 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:36.011 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:36.011 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:36.011 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:36.011 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:36.011 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:36.011 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:36.011 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:36.011 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:36.011 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:36.269 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:36.269 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:36.269 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:36.269 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:36.269 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:36.269 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:36.269 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:36.269 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:36.269 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:36.269 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:36.269 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:36.269 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:36.269 [91/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:36.269 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:36.269 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:36.527 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:36.527 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:36.527 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:36.527 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:36.527 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:36.527 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:36.527 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:36.527 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:36.527 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:36.527 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:36.527 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:36.527 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:36.527 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:36.527 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:36.527 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:36.527 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:36.527 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:36.527 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:36.527 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:36.527 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:36.527 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:36.527 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:36.527 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:36.527 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:36.527 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:36.527 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:36.527 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:36.527 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:36.527 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:36.527 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:36.786 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:36.786 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:36.786 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:36.786 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:36.786 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:36.786 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:36.786 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:36.786 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:36.786 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:36.786 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:36.786 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:36.786 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:36.786 [136/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:36.786 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:36.786 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:36.786 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:36.786 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:36.786 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:36.786 [142/203] Linking target lib/libxnvme.so
00:02:37.045 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:37.045 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:37.045 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:37.045 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:37.045 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:37.045 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:37.045 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:37.045 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:37.045 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:37.045 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:37.045 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:37.045 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:37.045 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:37.304 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:37.304 [157/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:37.304 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:37.304 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:37.304 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:37.304 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:37.304 [162/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:37.304 [163/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:37.304 [164/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:37.304 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:37.304 [166/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:37.304 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:37.304 [168/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:37.562 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:37.562 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:37.562 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:37.562 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:37.562 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:37.562 [174/203] Linking static target lib/libxnvme.a
00:02:37.562 [175/203] Linking target tests/xnvme_tests_scc
00:02:37.563 [176/203] Linking target tests/xnvme_tests_ioworker
00:02:37.563 [177/203] Linking target tests/xnvme_tests_lblk
00:02:37.563 [178/203] Linking target tests/xnvme_tests_async_intf
00:02:37.563 [179/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:37.563 [180/203] Linking target tests/xnvme_tests_buf
00:02:37.822 [181/203] Linking target tests/xnvme_tests_xnvme_file
00:02:37.822 [182/203] Linking target tests/xnvme_tests_znd_append
00:02:37.822 [183/203] Linking target tests/xnvme_tests_znd_state
00:02:37.822 [184/203] Linking target tests/xnvme_tests_enum
00:02:37.822 [185/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:37.822 [186/203] Linking target tests/xnvme_tests_cli
00:02:37.822 [187/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:37.822 [188/203] Linking target tests/xnvme_tests_map
00:02:37.822 [189/203] Linking target tests/xnvme_tests_kvs
00:02:37.822 [190/203] Linking target tools/lblk
00:02:37.822 [191/203] Linking target examples/xnvme_dev
00:02:37.822 [192/203] Linking target tools/xdd
00:02:37.822 [193/203] Linking target tools/kvs
00:02:37.822 [194/203] Linking target examples/xnvme_enum
00:02:37.822 [195/203] Linking target examples/xnvme_hello
00:02:37.822 [196/203] Linking target tools/xnvme_file
00:02:37.822 [197/203] Linking target examples/zoned_io_sync
00:02:37.822 [198/203] Linking target tools/xnvme
00:02:37.822 [199/203] Linking target examples/xnvme_single_sync
00:02:37.822 [200/203] Linking target examples/xnvme_single_async
00:02:37.822 [201/203] Linking target examples/xnvme_io_async
00:02:37.822 [202/203] Linking target examples/zoned_io_async
00:02:37.822 [203/203] Linking target tools/zoned
00:02:37.822 INFO: autodetecting backend as ninja
00:02:37.822 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:37.822 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:44.383 The Meson build system
00:02:44.383 Version: 1.5.0
00:02:44.383 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:44.383 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:44.383 Build type: native build
00:02:44.383 Program cat found: YES (/usr/bin/cat)
00:02:44.383 Project name: DPDK
00:02:44.383 Project version: 23.11.0
00:02:44.383 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:44.383 C linker for the host machine: cc ld.bfd 2.40-14
00:02:44.383 Host machine cpu family: x86_64
00:02:44.383 Host machine cpu: x86_64
00:02:44.383 Message: ## Building in Developer Mode ##
00:02:44.383 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:44.383 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:44.383 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:44.383 Program python3 found: YES (/usr/bin/python3)
00:02:44.383 Program cat found: YES (/usr/bin/cat)
00:02:44.383 Compiler for C supports arguments -march=native: YES
00:02:44.383 Checking for size of "void *" : 8
00:02:44.383 Checking for size of "void *" : 8 (cached)
00:02:44.383 Library m found: YES
00:02:44.383 Library numa found: YES
00:02:44.383 Has header "numaif.h" : YES
00:02:44.383 Library fdt found: NO
00:02:44.383 Library execinfo found: NO
00:02:44.383 Has header "execinfo.h" : YES
00:02:44.383 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:44.383 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:44.383 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:44.383 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:44.383 Run-time dependency openssl found: YES 3.1.1
00:02:44.383 Run-time dependency libpcap found: YES 1.10.4
00:02:44.383 Has header "pcap.h" with dependency libpcap: YES
00:02:44.383 Compiler for C supports arguments -Wcast-qual: YES
00:02:44.383 Compiler for C supports arguments -Wdeprecated: YES
00:02:44.383 Compiler for C supports arguments -Wformat: YES
00:02:44.383 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:44.383 Compiler for C supports arguments -Wformat-security: NO
00:02:44.383 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:44.383 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:44.383 Compiler for C supports arguments -Wnested-externs: YES
00:02:44.383 Compiler for C supports arguments -Wold-style-definition: YES
00:02:44.383 Compiler for C supports arguments -Wpointer-arith: YES
00:02:44.383 Compiler for C supports arguments -Wsign-compare: YES
00:02:44.383 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:44.383 Compiler for C supports arguments -Wundef: YES
00:02:44.383 Compiler for C supports arguments -Wwrite-strings: YES
00:02:44.383 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:44.383 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:44.383 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:44.383 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:44.383 Program objdump found: YES (/usr/bin/objdump)
00:02:44.383 Compiler for C supports arguments -mavx512f: YES
00:02:44.383 Checking if "AVX512 checking" compiles: YES
00:02:44.383 Fetching value of define "__SSE4_2__" : 1
00:02:44.383 Fetching value of define "__AES__" : 1
00:02:44.383 Fetching value of define "__AVX__" : 1
00:02:44.383 Fetching value of define "__AVX2__" : 1
00:02:44.383 Fetching value of define "__AVX512BW__" : (undefined)
00:02:44.383 Fetching value of define "__AVX512CD__" : (undefined)
00:02:44.383 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:44.383 Fetching value of define "__AVX512F__" : (undefined)
00:02:44.383 Fetching value of define "__AVX512VL__" : (undefined)
00:02:44.383 Fetching value of define "__PCLMUL__" : 1
00:02:44.383 Fetching value of define "__RDRND__" : 1
00:02:44.383 Fetching value of define "__RDSEED__" : 1
00:02:44.383 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:44.383 Fetching value of define "__znver1__" : (undefined)
00:02:44.383 Fetching value of define "__znver2__" : (undefined)
00:02:44.383 Fetching value of define "__znver3__" : (undefined)
00:02:44.383 Fetching value of define "__znver4__" : (undefined)
00:02:44.383 Library asan found: YES
00:02:44.383 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:44.383 Message: lib/log: Defining dependency "log"
00:02:44.383 Message: lib/kvargs: Defining dependency "kvargs"
00:02:44.383 Message: lib/telemetry: Defining dependency "telemetry"
00:02:44.383 Library rt found: YES
00:02:44.383 Checking for function "getentropy" : NO
00:02:44.383 Message: lib/eal: Defining dependency "eal"
00:02:44.383 Message: lib/ring: Defining dependency "ring"
00:02:44.383 Message: lib/rcu: Defining dependency "rcu"
00:02:44.383 Message: lib/mempool: Defining dependency "mempool"
00:02:44.383 Message: lib/mbuf: Defining dependency "mbuf"
00:02:44.383 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:44.383 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:44.383 Compiler for C supports arguments -mpclmul: YES
00:02:44.383 Compiler for C supports arguments -maes: YES
00:02:44.383 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:44.383 Compiler for C supports arguments -mavx512bw: YES
00:02:44.383 Compiler for C supports arguments -mavx512dq: YES
00:02:44.383 Compiler for C supports arguments -mavx512vl: YES
00:02:44.383 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:44.383 Compiler for C supports arguments -mavx2: YES
00:02:44.383 Compiler for C supports arguments -mavx: YES
00:02:44.383 Message: lib/net: Defining dependency "net"
00:02:44.383 Message: lib/meter: Defining dependency "meter"
00:02:44.383 Message: lib/ethdev: Defining dependency "ethdev"
00:02:44.383 Message: lib/pci: Defining dependency "pci"
00:02:44.383 Message: lib/cmdline: Defining dependency "cmdline"
00:02:44.383 Message: lib/hash: Defining dependency "hash"
00:02:44.383 Message: lib/timer: Defining dependency "timer"
00:02:44.384 Message: lib/compressdev: Defining dependency "compressdev"
00:02:44.384 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:44.384 Message: lib/dmadev: Defining dependency "dmadev"
00:02:44.384 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:44.384 Message: lib/power: Defining dependency "power"
00:02:44.384 Message: lib/reorder: Defining dependency "reorder"
00:02:44.384 Message: lib/security: Defining dependency "security"
00:02:44.384 Has header "linux/userfaultfd.h" : YES
00:02:44.384 Has header "linux/vduse.h" : YES
00:02:44.384 Message: lib/vhost: Defining dependency "vhost"
00:02:44.384 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:44.384 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:44.384 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:44.384 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:44.384 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:44.384 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:44.384 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:44.384 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:44.384 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:44.384 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:44.384 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:44.384 Configuring doxy-api-html.conf using configuration
00:02:44.384 Configuring doxy-api-man.conf using configuration
00:02:44.384 Program mandb found: YES (/usr/bin/mandb)
00:02:44.384 Program sphinx-build found: NO
00:02:44.384 Configuring rte_build_config.h using configuration
00:02:44.384 Message:
00:02:44.384 =================
00:02:44.384 Applications Enabled
00:02:44.384 =================
00:02:44.384 
00:02:44.384 apps:
00:02:44.384 
00:02:44.384 
00:02:44.384 Message:
00:02:44.384 =================
00:02:44.384 Libraries Enabled
00:02:44.384 =================
00:02:44.384 
00:02:44.384 libs:
00:02:44.384 	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:44.384 	net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:44.384 	cryptodev, dmadev, power, reorder, security, vhost,
00:02:44.384 
00:02:44.384 Message:
00:02:44.384 ===============
00:02:44.384 Drivers Enabled
00:02:44.384 ===============
00:02:44.384 
00:02:44.384 common:
00:02:44.384 
00:02:44.384 bus:
00:02:44.384 	pci, vdev,
00:02:44.384 mempool:
00:02:44.384 	ring,
00:02:44.384 dma:
00:02:44.384 
00:02:44.384 net:
00:02:44.384 
00:02:44.384 crypto:
00:02:44.384 
00:02:44.384 compress:
00:02:44.384 
00:02:44.384 vdpa:
00:02:44.384 
00:02:44.384 
00:02:44.384 Message:
00:02:44.384 =================
00:02:44.384 Content Skipped
00:02:44.384 =================
00:02:44.384 
00:02:44.384 apps:
00:02:44.384 	dumpcap: explicitly disabled via build config
00:02:44.384 	graph: explicitly disabled via build config
00:02:44.384 	pdump: explicitly disabled via build config
00:02:44.384 	proc-info: explicitly disabled via build config
00:02:44.384 	test-acl: explicitly disabled via build config
00:02:44.384 	test-bbdev: explicitly disabled via build config
00:02:44.384 	test-cmdline: explicitly disabled via build config
00:02:44.384 	test-compress-perf: explicitly disabled via build config
00:02:44.384 	test-crypto-perf: explicitly disabled via build config
00:02:44.384 	test-dma-perf: explicitly disabled via build config
00:02:44.384 	test-eventdev: explicitly disabled via build config
00:02:44.384 	test-fib: explicitly disabled via build config
00:02:44.384 	test-flow-perf: explicitly disabled via build config
00:02:44.384 	test-gpudev: explicitly disabled via build config
00:02:44.384 	test-mldev: explicitly disabled via build config
00:02:44.384 	test-pipeline: explicitly disabled via build config
00:02:44.384 	test-pmd: explicitly disabled via build config
00:02:44.384 	test-regex: explicitly disabled via build config
00:02:44.384 	test-sad: explicitly disabled via build config
00:02:44.384 	test-security-perf: explicitly disabled via build config
00:02:44.384 
00:02:44.384 libs:
00:02:44.384 	metrics: explicitly disabled via build config
00:02:44.384 	acl: explicitly disabled via build config
00:02:44.384 	bbdev: explicitly disabled via build config
00:02:44.384 	bitratestats: explicitly disabled via build config
00:02:44.384 	bpf: explicitly disabled via build config
00:02:44.384 	cfgfile: explicitly disabled via build config
00:02:44.384 	distributor: explicitly disabled via build config
00:02:44.384 	efd: explicitly disabled via build config
00:02:44.384 	eventdev: explicitly disabled via build config
00:02:44.384 	dispatcher: explicitly disabled via build config
00:02:44.384 	gpudev: explicitly disabled via build config
00:02:44.384 	gro: explicitly disabled via build config
00:02:44.384 	gso: explicitly disabled via build config
00:02:44.384 	ip_frag: explicitly disabled via build config
00:02:44.384 	jobstats: explicitly disabled via build config
00:02:44.384 	latencystats: explicitly disabled via build config
00:02:44.384 	lpm: explicitly disabled via build config
00:02:44.384 	member: explicitly disabled via build config
00:02:44.384 	pcapng: explicitly disabled via build config
00:02:44.384 	rawdev: explicitly disabled via build config
00:02:44.384 	regexdev: explicitly disabled via build config
00:02:44.384 	mldev: explicitly disabled via build config
00:02:44.384 	rib: explicitly disabled via build config
00:02:44.384 	sched: explicitly disabled via build config
00:02:44.384 	stack: explicitly disabled via build config
00:02:44.384 	ipsec: explicitly disabled via build config
00:02:44.384 	pdcp: explicitly disabled via build config
00:02:44.384 	fib: explicitly disabled via build config
00:02:44.384 	port: explicitly disabled via build config
00:02:44.384 	pdump: explicitly disabled via build config
00:02:44.384 	table: explicitly disabled via build config
00:02:44.384 	pipeline: explicitly disabled via build config
00:02:44.384 	graph: explicitly disabled via build config
00:02:44.384 	node: explicitly disabled via build config
00:02:44.384 
00:02:44.384 drivers:
00:02:44.384 	common/cpt: not in enabled drivers build config
00:02:44.384 	common/dpaax: not in enabled drivers build config
00:02:44.384 	common/iavf: not in enabled drivers build config
00:02:44.384 	common/idpf: not in enabled drivers build config
00:02:44.384 	common/mvep: not in enabled drivers build config
00:02:44.384 	common/octeontx: not in enabled drivers build config
00:02:44.384 	bus/auxiliary: not in enabled drivers build config
00:02:44.384 	bus/cdx: not in enabled drivers build config
00:02:44.384 	bus/dpaa: not in enabled drivers build config
00:02:44.384 	bus/fslmc: not in enabled drivers build config
00:02:44.384 	bus/ifpga: not in enabled drivers build config
00:02:44.384 	bus/platform: not in enabled drivers build config
00:02:44.384 	bus/vmbus: not in enabled drivers build config
00:02:44.384 	common/cnxk: not in enabled drivers
build config 00:02:44.384 common/mlx5: not in enabled drivers build config 00:02:44.384 common/nfp: not in enabled drivers build config 00:02:44.384 common/qat: not in enabled drivers build config 00:02:44.384 common/sfc_efx: not in enabled drivers build config 00:02:44.384 mempool/bucket: not in enabled drivers build config 00:02:44.384 mempool/cnxk: not in enabled drivers build config 00:02:44.384 mempool/dpaa: not in enabled drivers build config 00:02:44.384 mempool/dpaa2: not in enabled drivers build config 00:02:44.384 mempool/octeontx: not in enabled drivers build config 00:02:44.384 mempool/stack: not in enabled drivers build config 00:02:44.384 dma/cnxk: not in enabled drivers build config 00:02:44.384 dma/dpaa: not in enabled drivers build config 00:02:44.384 dma/dpaa2: not in enabled drivers build config 00:02:44.384 dma/hisilicon: not in enabled drivers build config 00:02:44.384 dma/idxd: not in enabled drivers build config 00:02:44.384 dma/ioat: not in enabled drivers build config 00:02:44.384 dma/skeleton: not in enabled drivers build config 00:02:44.384 net/af_packet: not in enabled drivers build config 00:02:44.384 net/af_xdp: not in enabled drivers build config 00:02:44.384 net/ark: not in enabled drivers build config 00:02:44.384 net/atlantic: not in enabled drivers build config 00:02:44.384 net/avp: not in enabled drivers build config 00:02:44.384 net/axgbe: not in enabled drivers build config 00:02:44.384 net/bnx2x: not in enabled drivers build config 00:02:44.384 net/bnxt: not in enabled drivers build config 00:02:44.384 net/bonding: not in enabled drivers build config 00:02:44.384 net/cnxk: not in enabled drivers build config 00:02:44.384 net/cpfl: not in enabled drivers build config 00:02:44.384 net/cxgbe: not in enabled drivers build config 00:02:44.384 net/dpaa: not in enabled drivers build config 00:02:44.384 net/dpaa2: not in enabled drivers build config 00:02:44.384 net/e1000: not in enabled drivers build config 00:02:44.384 net/ena: not in enabled drivers build config 00:02:44.384 net/enetc: not in enabled drivers build config 00:02:44.384 net/enetfec: not in enabled drivers build config 00:02:44.384 net/enic: not in enabled drivers build config 00:02:44.384 net/failsafe: not in enabled drivers build config 00:02:44.384 net/fm10k: not in enabled drivers build config 00:02:44.384 net/gve: not in enabled drivers build config 00:02:44.384 net/hinic: not in enabled drivers build config 00:02:44.384 net/hns3: not in enabled drivers build config 00:02:44.384 net/i40e: not in enabled drivers build config 00:02:44.384 net/iavf: not in enabled drivers build config 00:02:44.384 net/ice: not in enabled drivers build config 00:02:44.384 net/idpf: not in enabled drivers build config 00:02:44.384 net/igc: not in enabled drivers build config 00:02:44.384 net/ionic: not in enabled drivers build config 00:02:44.384 net/ipn3ke: not in enabled drivers build config 00:02:44.384 net/ixgbe: not in enabled drivers build config 00:02:44.384 net/mana: not in enabled drivers build config 00:02:44.384 net/memif: not in enabled drivers build config 00:02:44.384 net/mlx4: not in enabled drivers build config 00:02:44.384 net/mlx5: not in enabled drivers build config 00:02:44.384 net/mvneta: not in enabled drivers build config 00:02:44.384 net/mvpp2: not in enabled drivers build config 00:02:44.384 net/netvsc: not in enabled drivers build config 00:02:44.384 net/nfb: not in enabled drivers build config 00:02:44.385 net/nfp: not in enabled drivers build config 00:02:44.385 net/ngbe: not in 
enabled drivers build config 00:02:44.385 net/null: not in enabled drivers build config 00:02:44.385 net/octeontx: not in enabled drivers build config 00:02:44.385 net/octeon_ep: not in enabled drivers build config 00:02:44.385 net/pcap: not in enabled drivers build config 00:02:44.385 net/pfe: not in enabled drivers build config 00:02:44.385 net/qede: not in enabled drivers build config 00:02:44.385 net/ring: not in enabled drivers build config 00:02:44.385 net/sfc: not in enabled drivers build config 00:02:44.385 net/softnic: not in enabled drivers build config 00:02:44.385 net/tap: not in enabled drivers build config 00:02:44.385 net/thunderx: not in enabled drivers build config 00:02:44.385 net/txgbe: not in enabled drivers build config 00:02:44.385 net/vdev_netvsc: not in enabled drivers build config 00:02:44.385 net/vhost: not in enabled drivers build config 00:02:44.385 net/virtio: not in enabled drivers build config 00:02:44.385 net/vmxnet3: not in enabled drivers build config 00:02:44.385 raw/*: missing internal dependency, "rawdev" 00:02:44.385 crypto/armv8: not in enabled drivers build config 00:02:44.385 crypto/bcmfs: not in enabled drivers build config 00:02:44.385 crypto/caam_jr: not in enabled drivers build config 00:02:44.385 crypto/ccp: not in enabled drivers build config 00:02:44.385 crypto/cnxk: not in enabled drivers build config 00:02:44.385 crypto/dpaa_sec: not in enabled drivers build config 00:02:44.385 crypto/dpaa2_sec: not in enabled drivers build config 00:02:44.385 crypto/ipsec_mb: not in enabled drivers build config 00:02:44.385 crypto/mlx5: not in enabled drivers build config 00:02:44.385 crypto/mvsam: not in enabled drivers build config 00:02:44.385 crypto/nitrox: not in enabled drivers build config 00:02:44.385 crypto/null: not in enabled drivers build config 00:02:44.385 crypto/octeontx: not in enabled drivers build config 00:02:44.385 crypto/openssl: not in enabled drivers build config 00:02:44.385 crypto/scheduler: not in enabled drivers build config 00:02:44.385 crypto/uadk: not in enabled drivers build config 00:02:44.385 crypto/virtio: not in enabled drivers build config 00:02:44.385 compress/isal: not in enabled drivers build config 00:02:44.385 compress/mlx5: not in enabled drivers build config 00:02:44.385 compress/octeontx: not in enabled drivers build config 00:02:44.385 compress/zlib: not in enabled drivers build config 00:02:44.385 regex/*: missing internal dependency, "regexdev" 00:02:44.385 ml/*: missing internal dependency, "mldev" 00:02:44.385 vdpa/ifc: not in enabled drivers build config 00:02:44.385 vdpa/mlx5: not in enabled drivers build config 00:02:44.385 vdpa/nfp: not in enabled drivers build config 00:02:44.385 vdpa/sfc: not in enabled drivers build config 00:02:44.385 event/*: missing internal dependency, "eventdev" 00:02:44.385 baseband/*: missing internal dependency, "bbdev" 00:02:44.385 gpu/*: missing internal dependency, "gpudev" 00:02:44.385 00:02:44.385 00:02:44.385 Build targets in project: 85 00:02:44.385 00:02:44.385 DPDK 23.11.0 00:02:44.385 00:02:44.385 User defined options 00:02:44.385 buildtype : debug 00:02:44.385 default_library : shared 00:02:44.385 libdir : lib 00:02:44.385 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:44.385 b_sanitize : address 00:02:44.385 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:44.385 c_link_args : 00:02:44.385 cpu_instruction_set: native 00:02:44.385 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:44.385 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:44.385 enable_docs : false 00:02:44.385 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:44.385 enable_kmods : false 00:02:44.385 tests : false 00:02:44.385 00:02:44.385 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.643 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:44.902 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:44.902 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.902 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.902 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.902 [5/265] Linking static target lib/librte_kvargs.a 00:02:44.902 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:44.902 [7/265] Linking static target lib/librte_log.a 00:02:44.902 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.902 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:45.160 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:45.419 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.677 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:45.677 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:45.677 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:45.935 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:45.935 [16/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.935 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:45.935 [18/265] Linking target lib/librte_log.so.24.0 00:02:45.935 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:45.935 [20/265] Linking static target lib/librte_telemetry.a 00:02:46.194 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:46.194 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:46.194 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:46.194 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:46.194 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:46.194 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:46.452 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:46.710 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.710 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.710 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.710 [31/265] Generating lib/telemetry.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:46.968 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:46.968 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:46.968 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:46.968 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:47.226 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:47.226 [37/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:47.226 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.227 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.227 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.227 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.227 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.227 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.484 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.743 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.743 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.743 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:48.001 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:48.001 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:48.259 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:48.259 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:48.517 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:48.518 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:48.518 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:48.518 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.518 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.518 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:48.775 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.775 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:49.033 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:49.033 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:49.033 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:49.033 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:49.033 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:49.033 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:49.290 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:49.290 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:49.290 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:49.856 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:49.856 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:49.856 [71/265] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:49.856 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:49.856 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:49.856 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:50.114 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:50.114 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:50.114 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:50.114 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:50.372 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:50.372 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:50.372 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:50.629 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:50.888 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:50.888 [84/265] Linking static target lib/librte_ring.a 00:02:50.888 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:50.888 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:50.888 [87/265] Linking static target lib/librte_eal.a 00:02:51.146 [88/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.405 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:51.405 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:51.405 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:51.405 [92/265] Linking static target lib/librte_rcu.a 00:02:51.405 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:51.405 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:51.405 [95/265] Linking static target lib/librte_mempool.a 00:02:51.663 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:51.663 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:51.921 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.180 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:52.180 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:52.438 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:52.438 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:52.438 [103/265] Linking static target lib/librte_mbuf.a 00:02:52.438 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:52.438 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:52.438 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:52.438 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:52.696 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:52.697 [109/265] Linking static target lib/librte_net.a 00:02:52.697 [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.697 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:52.697 [112/265] Linking static target lib/librte_meter.a 00:02:53.263 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:53.263 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:53.263 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:53.263 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:53.263 [117/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.263 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:53.522 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.088 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:54.346 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:54.346 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:54.346 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:54.346 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:54.346 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:54.603 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:54.603 [127/265] Linking static target lib/librte_pci.a 00:02:54.603 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:54.603 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:54.603 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:54.603 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:54.603 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:54.862 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:54.862 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:54.862 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.862 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:54.862 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:54.862 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:55.120 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:55.120 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:55.120 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:55.120 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:55.120 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:55.120 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:55.377 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:55.377 [146/265] Linking static target lib/librte_cmdline.a 00:02:55.634 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:55.634 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:55.634 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:55.891 [150/265] Linking static target lib/librte_ethdev.a 00:02:55.891 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:55.891 [152/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:55.891 [153/265] Linking 
static target lib/librte_timer.a 00:02:56.453 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:56.453 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:56.453 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:56.453 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:56.453 [158/265] Linking static target lib/librte_hash.a 00:02:56.453 [159/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:56.453 [160/265] Linking static target lib/librte_compressdev.a 00:02:56.453 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:56.710 [162/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.968 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:56.968 [164/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.226 [165/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:57.226 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:57.226 [167/265] Linking static target lib/librte_dmadev.a 00:02:57.226 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:57.226 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:57.226 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:57.483 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.483 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.741 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:57.741 [174/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:57.741 [175/265] Linking static target lib/librte_cryptodev.a 00:02:57.741 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.999 [177/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:57.999 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:57.999 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:57.999 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:57.999 [181/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:58.257 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:58.257 [183/265] Linking static target lib/librte_power.a 00:02:58.516 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:58.516 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:58.516 [186/265] Linking static target lib/librte_reorder.a 00:02:58.773 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:58.773 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:58.773 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:58.773 [190/265] Linking static target lib/librte_security.a 00:02:59.030 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.287 [192/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.287 [193/265] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:59.544 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.544 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:59.802 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:59.802 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:59.802 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.802 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:59.802 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:59.802 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:00.061 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:00.319 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:00.319 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:00.319 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:00.319 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:00.578 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:00.578 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:00.578 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:00.578 [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.578 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.578 [212/265] Linking static target drivers/librte_bus_vdev.a 00:03:00.836 [213/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:00.836 [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:00.836 [215/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:00.836 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.836 [217/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.836 [218/265] Linking static target drivers/librte_bus_pci.a 00:03:00.836 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.836 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:00.836 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.836 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.095 [223/265] Linking static target drivers/librte_mempool_ring.a 00:03:01.354 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.922 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.922 [226/265] Linking target lib/librte_eal.so.24.0 00:03:02.181 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:02.181 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:02.181 [229/265] Linking target lib/librte_dmadev.so.24.0 00:03:02.181 [230/265] Linking target lib/librte_meter.so.24.0 00:03:02.181 [231/265] Linking 
target lib/librte_pci.so.24.0 00:03:02.181 [232/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:02.181 [233/265] Linking target lib/librte_ring.so.24.0 00:03:02.181 [234/265] Linking target lib/librte_timer.so.24.0 00:03:02.477 [235/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:02.477 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:02.477 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:02.477 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:02.477 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:02.477 [240/265] Linking target lib/librte_mempool.so.24.0 00:03:02.477 [241/265] Linking target lib/librte_rcu.so.24.0 00:03:02.477 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:02.762 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:02.762 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:02.762 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:02.762 [246/265] Linking target lib/librte_mbuf.so.24.0 00:03:02.762 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:02.762 [248/265] Linking target lib/librte_compressdev.so.24.0 00:03:02.762 [249/265] Linking target lib/librte_reorder.so.24.0 00:03:02.762 [250/265] Linking target lib/librte_net.so.24.0 00:03:03.021 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:03:03.021 [252/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.021 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:03.021 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:03.021 [255/265] Linking target lib/librte_hash.so.24.0 00:03:03.021 [256/265] Linking target lib/librte_cmdline.so.24.0 00:03:03.021 [257/265] Linking target lib/librte_security.so.24.0 00:03:03.021 [258/265] Linking target lib/librte_ethdev.so.24.0 00:03:03.280 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:03.280 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:03.280 [261/265] Linking target lib/librte_power.so.24.0 00:03:05.185 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:05.185 [263/265] Linking static target lib/librte_vhost.a 00:03:07.089 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.349 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:07.349 INFO: autodetecting backend as ninja 00:03:07.349 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:08.291 CC lib/ut/ut.o 00:03:08.291 CC lib/log/log.o 00:03:08.291 CC lib/log/log_flags.o 00:03:08.291 CC lib/log/log_deprecated.o 00:03:08.291 CC lib/ut_mock/mock.o 00:03:08.550 LIB libspdk_ut_mock.a 00:03:08.550 SO libspdk_ut_mock.so.5.0 00:03:08.550 LIB libspdk_log.a 00:03:08.550 LIB libspdk_ut.a 00:03:08.550 SO libspdk_log.so.6.1 00:03:08.550 SO libspdk_ut.so.1.0 00:03:08.550 SYMLINK libspdk_ut_mock.so 00:03:08.550 SYMLINK libspdk_ut.so 00:03:08.809 SYMLINK libspdk_log.so 00:03:08.809 CC lib/dma/dma.o 00:03:08.809 CC lib/util/base64.o 00:03:08.809 CC 
lib/util/bit_array.o 00:03:08.809 CXX lib/trace_parser/trace.o 00:03:08.809 CC lib/ioat/ioat.o 00:03:08.809 CC lib/util/crc32.o 00:03:08.809 CC lib/util/cpuset.o 00:03:08.809 CC lib/util/crc16.o 00:03:08.809 CC lib/util/crc32c.o 00:03:08.809 CC lib/vfio_user/host/vfio_user_pci.o 00:03:09.068 CC lib/util/crc32_ieee.o 00:03:09.068 CC lib/vfio_user/host/vfio_user.o 00:03:09.068 CC lib/util/crc64.o 00:03:09.068 CC lib/util/dif.o 00:03:09.068 LIB libspdk_dma.a 00:03:09.068 SO libspdk_dma.so.3.0 00:03:09.068 CC lib/util/fd.o 00:03:09.068 CC lib/util/file.o 00:03:09.068 CC lib/util/hexlify.o 00:03:09.068 SYMLINK libspdk_dma.so 00:03:09.068 LIB libspdk_ioat.a 00:03:09.068 CC lib/util/iov.o 00:03:09.068 CC lib/util/math.o 00:03:09.068 SO libspdk_ioat.so.6.0 00:03:09.328 SYMLINK libspdk_ioat.so 00:03:09.328 CC lib/util/pipe.o 00:03:09.328 CC lib/util/strerror_tls.o 00:03:09.328 CC lib/util/string.o 00:03:09.328 CC lib/util/uuid.o 00:03:09.328 LIB libspdk_vfio_user.a 00:03:09.328 CC lib/util/fd_group.o 00:03:09.328 SO libspdk_vfio_user.so.4.0 00:03:09.328 CC lib/util/xor.o 00:03:09.328 CC lib/util/zipf.o 00:03:09.328 SYMLINK libspdk_vfio_user.so 00:03:09.897 LIB libspdk_util.a 00:03:09.897 SO libspdk_util.so.8.0 00:03:09.897 LIB libspdk_trace_parser.a 00:03:09.897 SYMLINK libspdk_util.so 00:03:09.897 SO libspdk_trace_parser.so.4.0 00:03:10.156 CC lib/json/json_parse.o 00:03:10.156 CC lib/json/json_util.o 00:03:10.156 CC lib/json/json_write.o 00:03:10.156 CC lib/vmd/led.o 00:03:10.156 CC lib/vmd/vmd.o 00:03:10.156 CC lib/rdma/common.o 00:03:10.156 CC lib/idxd/idxd.o 00:03:10.156 CC lib/conf/conf.o 00:03:10.156 SYMLINK libspdk_trace_parser.so 00:03:10.156 CC lib/env_dpdk/env.o 00:03:10.156 CC lib/env_dpdk/memory.o 00:03:10.156 CC lib/env_dpdk/pci.o 00:03:10.416 LIB libspdk_conf.a 00:03:10.416 CC lib/rdma/rdma_verbs.o 00:03:10.416 SO libspdk_conf.so.5.0 00:03:10.416 CC lib/idxd/idxd_user.o 00:03:10.416 CC lib/idxd/idxd_kernel.o 00:03:10.416 SYMLINK libspdk_conf.so 00:03:10.416 LIB libspdk_json.a 00:03:10.416 CC lib/env_dpdk/init.o 00:03:10.416 SO libspdk_json.so.5.1 00:03:10.676 SYMLINK libspdk_json.so 00:03:10.676 CC lib/env_dpdk/threads.o 00:03:10.676 LIB libspdk_rdma.a 00:03:10.676 CC lib/env_dpdk/pci_ioat.o 00:03:10.676 SO libspdk_rdma.so.5.0 00:03:10.676 SYMLINK libspdk_rdma.so 00:03:10.676 CC lib/env_dpdk/pci_virtio.o 00:03:10.676 CC lib/env_dpdk/pci_vmd.o 00:03:10.676 CC lib/env_dpdk/pci_idxd.o 00:03:10.676 CC lib/env_dpdk/pci_event.o 00:03:10.676 CC lib/jsonrpc/jsonrpc_server.o 00:03:10.935 CC lib/env_dpdk/sigbus_handler.o 00:03:10.935 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:10.935 CC lib/env_dpdk/pci_dpdk.o 00:03:10.935 LIB libspdk_idxd.a 00:03:10.935 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:10.935 SO libspdk_idxd.so.11.0 00:03:10.935 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:10.935 LIB libspdk_vmd.a 00:03:10.935 SYMLINK libspdk_idxd.so 00:03:10.935 CC lib/jsonrpc/jsonrpc_client.o 00:03:10.935 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:10.935 SO libspdk_vmd.so.5.0 00:03:10.935 SYMLINK libspdk_vmd.so 00:03:11.194 LIB libspdk_jsonrpc.a 00:03:11.194 SO libspdk_jsonrpc.so.5.1 00:03:11.194 SYMLINK libspdk_jsonrpc.so 00:03:11.454 CC lib/rpc/rpc.o 00:03:11.714 LIB libspdk_rpc.a 00:03:11.714 SO libspdk_rpc.so.5.0 00:03:11.714 SYMLINK libspdk_rpc.so 00:03:11.714 LIB libspdk_env_dpdk.a 00:03:11.974 CC lib/notify/notify.o 00:03:11.974 CC lib/notify/notify_rpc.o 00:03:11.974 CC lib/trace/trace.o 00:03:11.974 CC lib/trace/trace_flags.o 00:03:11.974 CC lib/trace/trace_rpc.o 00:03:11.974 CC lib/sock/sock.o 
00:03:11.974 CC lib/sock/sock_rpc.o 00:03:11.974 SO libspdk_env_dpdk.so.13.0 00:03:11.974 SYMLINK libspdk_env_dpdk.so 00:03:11.974 LIB libspdk_notify.a 00:03:12.234 SO libspdk_notify.so.5.0 00:03:12.234 SYMLINK libspdk_notify.so 00:03:12.234 LIB libspdk_trace.a 00:03:12.234 SO libspdk_trace.so.9.0 00:03:12.234 SYMLINK libspdk_trace.so 00:03:12.493 LIB libspdk_sock.a 00:03:12.493 SO libspdk_sock.so.8.0 00:03:12.493 CC lib/thread/thread.o 00:03:12.493 CC lib/thread/iobuf.o 00:03:12.493 SYMLINK libspdk_sock.so 00:03:12.752 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:12.752 CC lib/nvme/nvme_ctrlr.o 00:03:12.752 CC lib/nvme/nvme_fabric.o 00:03:12.752 CC lib/nvme/nvme_ns_cmd.o 00:03:12.752 CC lib/nvme/nvme_ns.o 00:03:12.752 CC lib/nvme/nvme_pcie_common.o 00:03:12.752 CC lib/nvme/nvme_pcie.o 00:03:12.752 CC lib/nvme/nvme_qpair.o 00:03:12.752 CC lib/nvme/nvme.o 00:03:13.319 CC lib/nvme/nvme_quirks.o 00:03:13.578 CC lib/nvme/nvme_transport.o 00:03:13.578 CC lib/nvme/nvme_discovery.o 00:03:13.578 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:13.578 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:13.835 CC lib/nvme/nvme_tcp.o 00:03:13.835 CC lib/nvme/nvme_opal.o 00:03:13.835 CC lib/nvme/nvme_io_msg.o 00:03:14.093 CC lib/nvme/nvme_poll_group.o 00:03:14.093 CC lib/nvme/nvme_zns.o 00:03:14.093 CC lib/nvme/nvme_cuse.o 00:03:14.350 CC lib/nvme/nvme_vfio_user.o 00:03:14.350 CC lib/nvme/nvme_rdma.o 00:03:14.350 LIB libspdk_thread.a 00:03:14.608 SO libspdk_thread.so.9.0 00:03:14.608 SYMLINK libspdk_thread.so 00:03:14.608 CC lib/accel/accel.o 00:03:14.608 CC lib/blob/blobstore.o 00:03:14.608 CC lib/init/json_config.o 00:03:14.867 CC lib/init/subsystem.o 00:03:14.867 CC lib/virtio/virtio.o 00:03:14.867 CC lib/virtio/virtio_vhost_user.o 00:03:15.125 CC lib/init/subsystem_rpc.o 00:03:15.125 CC lib/init/rpc.o 00:03:15.125 CC lib/virtio/virtio_vfio_user.o 00:03:15.125 CC lib/virtio/virtio_pci.o 00:03:15.125 LIB libspdk_init.a 00:03:15.125 CC lib/accel/accel_rpc.o 00:03:15.125 SO libspdk_init.so.4.0 00:03:15.383 SYMLINK libspdk_init.so 00:03:15.383 CC lib/blob/request.o 00:03:15.383 CC lib/blob/zeroes.o 00:03:15.383 CC lib/accel/accel_sw.o 00:03:15.383 CC lib/blob/blob_bs_dev.o 00:03:15.383 CC lib/event/app.o 00:03:15.641 LIB libspdk_virtio.a 00:03:15.641 SO libspdk_virtio.so.6.0 00:03:15.641 CC lib/event/reactor.o 00:03:15.641 CC lib/event/log_rpc.o 00:03:15.641 SYMLINK libspdk_virtio.so 00:03:15.641 CC lib/event/app_rpc.o 00:03:15.641 CC lib/event/scheduler_static.o 00:03:15.899 LIB libspdk_accel.a 00:03:15.899 SO libspdk_accel.so.14.0 00:03:16.161 LIB libspdk_nvme.a 00:03:16.161 SYMLINK libspdk_accel.so 00:03:16.161 LIB libspdk_event.a 00:03:16.161 SO libspdk_event.so.12.0 00:03:16.161 SYMLINK libspdk_event.so 00:03:16.161 CC lib/bdev/bdev.o 00:03:16.161 CC lib/bdev/bdev_rpc.o 00:03:16.161 CC lib/bdev/bdev_zone.o 00:03:16.161 CC lib/bdev/part.o 00:03:16.161 CC lib/bdev/scsi_nvme.o 00:03:16.161 SO libspdk_nvme.so.12.0 00:03:16.418 SYMLINK libspdk_nvme.so 00:03:18.323 LIB libspdk_blob.a 00:03:18.323 SO libspdk_blob.so.10.1 00:03:18.323 SYMLINK libspdk_blob.so 00:03:18.582 CC lib/blobfs/blobfs.o 00:03:18.582 CC lib/blobfs/tree.o 00:03:18.582 CC lib/lvol/lvol.o 00:03:19.150 LIB libspdk_bdev.a 00:03:19.409 SO libspdk_bdev.so.14.0 00:03:19.409 SYMLINK libspdk_bdev.so 00:03:19.668 CC lib/scsi/dev.o 00:03:19.668 CC lib/scsi/lun.o 00:03:19.668 CC lib/scsi/port.o 00:03:19.668 CC lib/scsi/scsi.o 00:03:19.668 CC lib/nbd/nbd.o 00:03:19.668 LIB libspdk_blobfs.a 00:03:19.668 CC lib/ftl/ftl_core.o 00:03:19.668 CC lib/nvmf/ctrlr.o 00:03:19.668 CC 
lib/ublk/ublk.o 00:03:19.668 SO libspdk_blobfs.so.9.0 00:03:19.668 LIB libspdk_lvol.a 00:03:19.668 SO libspdk_lvol.so.9.1 00:03:19.668 SYMLINK libspdk_blobfs.so 00:03:19.668 CC lib/scsi/scsi_bdev.o 00:03:19.668 CC lib/nbd/nbd_rpc.o 00:03:19.668 SYMLINK libspdk_lvol.so 00:03:19.668 CC lib/ublk/ublk_rpc.o 00:03:19.668 CC lib/ftl/ftl_init.o 00:03:19.928 CC lib/scsi/scsi_pr.o 00:03:19.928 CC lib/scsi/scsi_rpc.o 00:03:19.928 CC lib/nvmf/ctrlr_discovery.o 00:03:19.928 CC lib/scsi/task.o 00:03:19.928 CC lib/ftl/ftl_layout.o 00:03:19.928 CC lib/nvmf/ctrlr_bdev.o 00:03:20.187 LIB libspdk_nbd.a 00:03:20.187 CC lib/nvmf/subsystem.o 00:03:20.187 SO libspdk_nbd.so.6.0 00:03:20.187 SYMLINK libspdk_nbd.so 00:03:20.187 CC lib/ftl/ftl_debug.o 00:03:20.187 CC lib/nvmf/nvmf.o 00:03:20.187 CC lib/nvmf/nvmf_rpc.o 00:03:20.445 LIB libspdk_scsi.a 00:03:20.445 CC lib/ftl/ftl_io.o 00:03:20.445 LIB libspdk_ublk.a 00:03:20.445 SO libspdk_scsi.so.8.0 00:03:20.445 SO libspdk_ublk.so.2.0 00:03:20.445 CC lib/nvmf/transport.o 00:03:20.445 SYMLINK libspdk_ublk.so 00:03:20.445 CC lib/ftl/ftl_sb.o 00:03:20.445 SYMLINK libspdk_scsi.so 00:03:20.445 CC lib/ftl/ftl_l2p.o 00:03:20.445 CC lib/ftl/ftl_l2p_flat.o 00:03:20.704 CC lib/ftl/ftl_nv_cache.o 00:03:20.704 CC lib/ftl/ftl_band.o 00:03:20.704 CC lib/ftl/ftl_band_ops.o 00:03:20.704 CC lib/ftl/ftl_writer.o 00:03:20.963 CC lib/nvmf/tcp.o 00:03:20.963 CC lib/ftl/ftl_rq.o 00:03:21.222 CC lib/nvmf/rdma.o 00:03:21.222 CC lib/ftl/ftl_reloc.o 00:03:21.222 CC lib/ftl/ftl_l2p_cache.o 00:03:21.222 CC lib/ftl/ftl_p2l.o 00:03:21.480 CC lib/iscsi/conn.o 00:03:21.480 CC lib/vhost/vhost.o 00:03:21.480 CC lib/vhost/vhost_rpc.o 00:03:21.739 CC lib/vhost/vhost_scsi.o 00:03:21.739 CC lib/ftl/mngt/ftl_mngt.o 00:03:21.739 CC lib/vhost/vhost_blk.o 00:03:21.739 CC lib/vhost/rte_vhost_user.o 00:03:21.998 CC lib/iscsi/init_grp.o 00:03:21.998 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:21.998 CC lib/iscsi/iscsi.o 00:03:22.256 CC lib/iscsi/md5.o 00:03:22.256 CC lib/iscsi/param.o 00:03:22.256 CC lib/iscsi/portal_grp.o 00:03:22.256 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:22.256 CC lib/iscsi/tgt_node.o 00:03:22.515 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:22.515 CC lib/iscsi/iscsi_subsystem.o 00:03:22.515 CC lib/iscsi/iscsi_rpc.o 00:03:22.515 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:22.773 CC lib/iscsi/task.o 00:03:22.773 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:22.773 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:23.032 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:23.032 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:23.032 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:23.032 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:23.032 LIB libspdk_vhost.a 00:03:23.032 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:23.032 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:23.032 SO libspdk_vhost.so.7.1 00:03:23.291 CC lib/ftl/utils/ftl_conf.o 00:03:23.291 CC lib/ftl/utils/ftl_md.o 00:03:23.291 CC lib/ftl/utils/ftl_mempool.o 00:03:23.291 SYMLINK libspdk_vhost.so 00:03:23.291 CC lib/ftl/utils/ftl_property.o 00:03:23.291 CC lib/ftl/utils/ftl_bitmap.o 00:03:23.291 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:23.291 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:23.291 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:23.551 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:23.551 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:23.551 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:23.551 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:23.551 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:23.551 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:23.551 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:23.551 CC lib/ftl/base/ftl_base_dev.o 
00:03:23.551 CC lib/ftl/base/ftl_base_bdev.o 00:03:23.810 CC lib/ftl/ftl_trace.o 00:03:23.810 LIB libspdk_iscsi.a 00:03:23.810 LIB libspdk_nvmf.a 00:03:24.070 LIB libspdk_ftl.a 00:03:24.070 SO libspdk_iscsi.so.7.0 00:03:24.070 SO libspdk_nvmf.so.17.0 00:03:24.070 SYMLINK libspdk_iscsi.so 00:03:24.328 SO libspdk_ftl.so.8.0 00:03:24.328 SYMLINK libspdk_nvmf.so 00:03:24.328 SYMLINK libspdk_ftl.so 00:03:24.586 CC module/env_dpdk/env_dpdk_rpc.o 00:03:24.586 CC module/blob/bdev/blob_bdev.o 00:03:24.586 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:24.586 CC module/accel/dsa/accel_dsa.o 00:03:24.586 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:24.586 CC module/sock/posix/posix.o 00:03:24.586 CC module/scheduler/gscheduler/gscheduler.o 00:03:24.586 CC module/accel/error/accel_error.o 00:03:24.586 CC module/accel/iaa/accel_iaa.o 00:03:24.845 CC module/accel/ioat/accel_ioat.o 00:03:24.845 LIB libspdk_env_dpdk_rpc.a 00:03:24.845 SO libspdk_env_dpdk_rpc.so.5.0 00:03:24.845 LIB libspdk_scheduler_gscheduler.a 00:03:24.845 LIB libspdk_scheduler_dpdk_governor.a 00:03:24.845 SYMLINK libspdk_env_dpdk_rpc.so 00:03:24.845 CC module/accel/iaa/accel_iaa_rpc.o 00:03:24.845 SO libspdk_scheduler_gscheduler.so.3.0 00:03:24.845 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:24.845 CC module/accel/error/accel_error_rpc.o 00:03:24.845 LIB libspdk_scheduler_dynamic.a 00:03:24.845 SYMLINK libspdk_scheduler_gscheduler.so 00:03:24.845 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:24.845 CC module/accel/dsa/accel_dsa_rpc.o 00:03:24.845 CC module/accel/ioat/accel_ioat_rpc.o 00:03:24.845 SO libspdk_scheduler_dynamic.so.3.0 00:03:25.104 LIB libspdk_blob_bdev.a 00:03:25.104 LIB libspdk_accel_iaa.a 00:03:25.104 SYMLINK libspdk_scheduler_dynamic.so 00:03:25.104 SO libspdk_blob_bdev.so.10.1 00:03:25.104 SO libspdk_accel_iaa.so.2.0 00:03:25.104 LIB libspdk_accel_error.a 00:03:25.104 SO libspdk_accel_error.so.1.0 00:03:25.104 LIB libspdk_accel_dsa.a 00:03:25.104 SYMLINK libspdk_blob_bdev.so 00:03:25.104 LIB libspdk_accel_ioat.a 00:03:25.104 SYMLINK libspdk_accel_iaa.so 00:03:25.104 SO libspdk_accel_dsa.so.4.0 00:03:25.104 SO libspdk_accel_ioat.so.5.0 00:03:25.104 SYMLINK libspdk_accel_error.so 00:03:25.104 SYMLINK libspdk_accel_ioat.so 00:03:25.104 SYMLINK libspdk_accel_dsa.so 00:03:25.363 CC module/bdev/delay/vbdev_delay.o 00:03:25.363 CC module/bdev/error/vbdev_error.o 00:03:25.363 CC module/blobfs/bdev/blobfs_bdev.o 00:03:25.363 CC module/bdev/malloc/bdev_malloc.o 00:03:25.363 CC module/bdev/lvol/vbdev_lvol.o 00:03:25.363 CC module/bdev/gpt/gpt.o 00:03:25.363 CC module/bdev/null/bdev_null.o 00:03:25.363 CC module/bdev/nvme/bdev_nvme.o 00:03:25.363 CC module/bdev/passthru/vbdev_passthru.o 00:03:25.363 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:25.622 CC module/bdev/gpt/vbdev_gpt.o 00:03:25.622 CC module/bdev/error/vbdev_error_rpc.o 00:03:25.622 CC module/bdev/null/bdev_null_rpc.o 00:03:25.622 LIB libspdk_sock_posix.a 00:03:25.622 LIB libspdk_blobfs_bdev.a 00:03:25.622 SO libspdk_sock_posix.so.5.0 00:03:25.622 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:25.622 SO libspdk_blobfs_bdev.so.5.0 00:03:25.622 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:25.622 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:25.622 SYMLINK libspdk_sock_posix.so 00:03:25.881 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:25.881 SYMLINK libspdk_blobfs_bdev.so 00:03:25.881 LIB libspdk_bdev_error.a 00:03:25.881 LIB libspdk_bdev_null.a 00:03:25.881 SO libspdk_bdev_error.so.5.0 00:03:25.881 LIB libspdk_bdev_gpt.a 00:03:25.881 SO 
libspdk_bdev_null.so.5.0 00:03:25.881 SO libspdk_bdev_gpt.so.5.0 00:03:25.881 LIB libspdk_bdev_passthru.a 00:03:25.881 CC module/bdev/raid/bdev_raid.o 00:03:25.881 SYMLINK libspdk_bdev_error.so 00:03:25.881 CC module/bdev/raid/bdev_raid_rpc.o 00:03:25.881 SO libspdk_bdev_passthru.so.5.0 00:03:25.881 LIB libspdk_bdev_delay.a 00:03:25.881 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:25.881 LIB libspdk_bdev_malloc.a 00:03:25.881 SYMLINK libspdk_bdev_null.so 00:03:25.881 SYMLINK libspdk_bdev_gpt.so 00:03:25.881 SO libspdk_bdev_delay.so.5.0 00:03:25.881 SO libspdk_bdev_malloc.so.5.0 00:03:25.881 SYMLINK libspdk_bdev_passthru.so 00:03:25.881 CC module/bdev/raid/bdev_raid_sb.o 00:03:25.881 SYMLINK libspdk_bdev_malloc.so 00:03:25.881 SYMLINK libspdk_bdev_delay.so 00:03:25.881 CC module/bdev/split/vbdev_split.o 00:03:25.881 CC module/bdev/nvme/nvme_rpc.o 00:03:26.140 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:26.140 CC module/bdev/xnvme/bdev_xnvme.o 00:03:26.140 CC module/bdev/raid/raid0.o 00:03:26.140 CC module/bdev/raid/raid1.o 00:03:26.140 CC module/bdev/split/vbdev_split_rpc.o 00:03:26.140 LIB libspdk_bdev_lvol.a 00:03:26.397 CC module/bdev/raid/concat.o 00:03:26.397 SO libspdk_bdev_lvol.so.5.0 00:03:26.397 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:26.397 SYMLINK libspdk_bdev_lvol.so 00:03:26.397 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:26.397 CC module/bdev/nvme/bdev_mdns_client.o 00:03:26.397 LIB libspdk_bdev_split.a 00:03:26.397 SO libspdk_bdev_split.so.5.0 00:03:26.655 CC module/bdev/nvme/vbdev_opal.o 00:03:26.655 SYMLINK libspdk_bdev_split.so 00:03:26.655 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:26.655 LIB libspdk_bdev_xnvme.a 00:03:26.655 CC module/bdev/aio/bdev_aio.o 00:03:26.655 LIB libspdk_bdev_zone_block.a 00:03:26.655 SO libspdk_bdev_xnvme.so.2.0 00:03:26.655 SO libspdk_bdev_zone_block.so.5.0 00:03:26.655 CC module/bdev/ftl/bdev_ftl.o 00:03:26.655 CC module/bdev/iscsi/bdev_iscsi.o 00:03:26.655 SYMLINK libspdk_bdev_xnvme.so 00:03:26.655 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:26.655 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:26.655 SYMLINK libspdk_bdev_zone_block.so 00:03:26.655 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:26.914 CC module/bdev/aio/bdev_aio_rpc.o 00:03:26.914 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:26.914 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:26.914 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:26.914 LIB libspdk_bdev_ftl.a 00:03:26.914 LIB libspdk_bdev_aio.a 00:03:26.914 LIB libspdk_bdev_raid.a 00:03:26.914 SO libspdk_bdev_aio.so.5.0 00:03:26.914 SO libspdk_bdev_ftl.so.5.0 00:03:27.172 SO libspdk_bdev_raid.so.5.0 00:03:27.172 SYMLINK libspdk_bdev_aio.so 00:03:27.172 SYMLINK libspdk_bdev_ftl.so 00:03:27.172 LIB libspdk_bdev_iscsi.a 00:03:27.172 SO libspdk_bdev_iscsi.so.5.0 00:03:27.172 SYMLINK libspdk_bdev_raid.so 00:03:27.172 SYMLINK libspdk_bdev_iscsi.so 00:03:27.431 LIB libspdk_bdev_virtio.a 00:03:27.431 SO libspdk_bdev_virtio.so.5.0 00:03:27.431 SYMLINK libspdk_bdev_virtio.so 00:03:28.000 LIB libspdk_bdev_nvme.a 00:03:28.000 SO libspdk_bdev_nvme.so.6.0 00:03:28.000 SYMLINK libspdk_bdev_nvme.so 00:03:28.568 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:28.568 CC module/event/subsystems/iobuf/iobuf.o 00:03:28.568 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:28.568 CC module/event/subsystems/scheduler/scheduler.o 00:03:28.568 CC module/event/subsystems/sock/sock.o 00:03:28.568 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:28.568 CC module/event/subsystems/vmd/vmd.o 00:03:28.568 LIB libspdk_event_sock.a 
00:03:28.568 LIB libspdk_event_vhost_blk.a 00:03:28.568 SO libspdk_event_sock.so.4.0 00:03:28.568 LIB libspdk_event_scheduler.a 00:03:28.568 LIB libspdk_event_vmd.a 00:03:28.568 LIB libspdk_event_iobuf.a 00:03:28.568 SO libspdk_event_vhost_blk.so.2.0 00:03:28.568 SO libspdk_event_scheduler.so.3.0 00:03:28.568 SO libspdk_event_vmd.so.5.0 00:03:28.568 SO libspdk_event_iobuf.so.2.0 00:03:28.568 SYMLINK libspdk_event_sock.so 00:03:28.568 SYMLINK libspdk_event_scheduler.so 00:03:28.568 SYMLINK libspdk_event_vhost_blk.so 00:03:28.568 SYMLINK libspdk_event_vmd.so 00:03:28.568 SYMLINK libspdk_event_iobuf.so 00:03:28.827 CC module/event/subsystems/accel/accel.o 00:03:29.104 LIB libspdk_event_accel.a 00:03:29.104 SO libspdk_event_accel.so.5.0 00:03:29.104 SYMLINK libspdk_event_accel.so 00:03:29.433 CC module/event/subsystems/bdev/bdev.o 00:03:29.433 LIB libspdk_event_bdev.a 00:03:29.433 SO libspdk_event_bdev.so.5.0 00:03:29.702 SYMLINK libspdk_event_bdev.so 00:03:29.702 CC module/event/subsystems/nbd/nbd.o 00:03:29.702 CC module/event/subsystems/scsi/scsi.o 00:03:29.702 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:29.702 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:29.702 CC module/event/subsystems/ublk/ublk.o 00:03:29.960 LIB libspdk_event_nbd.a 00:03:29.960 LIB libspdk_event_ublk.a 00:03:29.960 LIB libspdk_event_scsi.a 00:03:29.960 SO libspdk_event_nbd.so.5.0 00:03:29.960 SO libspdk_event_ublk.so.2.0 00:03:29.960 SO libspdk_event_scsi.so.5.0 00:03:29.960 SYMLINK libspdk_event_nbd.so 00:03:29.960 SYMLINK libspdk_event_ublk.so 00:03:29.960 SYMLINK libspdk_event_scsi.so 00:03:29.960 LIB libspdk_event_nvmf.a 00:03:29.960 SO libspdk_event_nvmf.so.5.0 00:03:30.219 SYMLINK libspdk_event_nvmf.so 00:03:30.219 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.219 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.478 LIB libspdk_event_vhost_scsi.a 00:03:30.478 LIB libspdk_event_iscsi.a 00:03:30.478 SO libspdk_event_vhost_scsi.so.2.0 00:03:30.478 SO libspdk_event_iscsi.so.5.0 00:03:30.478 SYMLINK libspdk_event_vhost_scsi.so 00:03:30.478 SYMLINK libspdk_event_iscsi.so 00:03:30.478 SO libspdk.so.5.0 00:03:30.478 SYMLINK libspdk.so 00:03:30.737 CXX app/trace/trace.o 00:03:30.737 CC examples/nvme/hello_world/hello_world.o 00:03:30.737 CC examples/ioat/perf/perf.o 00:03:30.737 CC examples/accel/perf/accel_perf.o 00:03:30.737 CC examples/bdev/hello_world/hello_bdev.o 00:03:30.737 CC examples/blob/hello_world/hello_blob.o 00:03:30.737 CC test/bdev/bdevio/bdevio.o 00:03:30.737 CC test/blobfs/mkfs/mkfs.o 00:03:30.737 CC test/accel/dif/dif.o 00:03:30.737 CC test/app/bdev_svc/bdev_svc.o 00:03:30.995 LINK bdev_svc 00:03:30.995 LINK mkfs 00:03:31.253 LINK hello_world 00:03:31.253 LINK hello_bdev 00:03:31.253 LINK ioat_perf 00:03:31.253 LINK hello_blob 00:03:31.253 LINK spdk_trace 00:03:31.253 CC examples/ioat/verify/verify.o 00:03:31.253 LINK dif 00:03:31.253 LINK bdevio 00:03:31.511 CC examples/nvme/reconnect/reconnect.o 00:03:31.511 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:31.511 CC examples/bdev/bdevperf/bdevperf.o 00:03:31.511 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:31.511 CC app/trace_record/trace_record.o 00:03:31.511 LINK accel_perf 00:03:31.511 CC examples/blob/cli/blobcli.o 00:03:31.511 LINK verify 00:03:31.769 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:31.769 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:31.769 CC app/nvmf_tgt/nvmf_main.o 00:03:31.769 LINK spdk_trace_record 00:03:31.769 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:31.769 LINK reconnect 00:03:31.769 LINK 
nvme_fuzz 00:03:32.027 LINK nvmf_tgt 00:03:32.027 CC app/iscsi_tgt/iscsi_tgt.o 00:03:32.027 CC examples/sock/hello_world/hello_sock.o 00:03:32.027 LINK nvme_manage 00:03:32.027 LINK blobcli 00:03:32.027 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.285 LINK iscsi_tgt 00:03:32.285 CC examples/nvmf/nvmf/nvmf.o 00:03:32.285 CC examples/util/zipf/zipf.o 00:03:32.285 CC examples/nvme/arbitration/arbitration.o 00:03:32.285 LINK vhost_fuzz 00:03:32.285 LINK lsvmd 00:03:32.285 LINK hello_sock 00:03:32.285 CC test/app/histogram_perf/histogram_perf.o 00:03:32.285 LINK bdevperf 00:03:32.543 LINK zipf 00:03:32.543 CC app/spdk_tgt/spdk_tgt.o 00:03:32.543 CC examples/vmd/led/led.o 00:03:32.543 LINK histogram_perf 00:03:32.543 TEST_HEADER include/spdk/accel.h 00:03:32.543 TEST_HEADER include/spdk/accel_module.h 00:03:32.543 TEST_HEADER include/spdk/assert.h 00:03:32.543 TEST_HEADER include/spdk/barrier.h 00:03:32.543 TEST_HEADER include/spdk/base64.h 00:03:32.543 TEST_HEADER include/spdk/bdev.h 00:03:32.543 TEST_HEADER include/spdk/bdev_module.h 00:03:32.543 TEST_HEADER include/spdk/bdev_zone.h 00:03:32.543 TEST_HEADER include/spdk/bit_array.h 00:03:32.543 TEST_HEADER include/spdk/bit_pool.h 00:03:32.543 TEST_HEADER include/spdk/blob_bdev.h 00:03:32.543 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:32.543 TEST_HEADER include/spdk/blobfs.h 00:03:32.543 TEST_HEADER include/spdk/blob.h 00:03:32.543 TEST_HEADER include/spdk/conf.h 00:03:32.543 LINK nvmf 00:03:32.543 TEST_HEADER include/spdk/config.h 00:03:32.543 TEST_HEADER include/spdk/cpuset.h 00:03:32.543 TEST_HEADER include/spdk/crc16.h 00:03:32.543 TEST_HEADER include/spdk/crc32.h 00:03:32.543 TEST_HEADER include/spdk/crc64.h 00:03:32.543 TEST_HEADER include/spdk/dif.h 00:03:32.543 TEST_HEADER include/spdk/dma.h 00:03:32.543 TEST_HEADER include/spdk/endian.h 00:03:32.543 CC examples/thread/thread/thread_ex.o 00:03:32.543 TEST_HEADER include/spdk/env_dpdk.h 00:03:32.543 TEST_HEADER include/spdk/env.h 00:03:32.543 TEST_HEADER include/spdk/event.h 00:03:32.543 TEST_HEADER include/spdk/fd_group.h 00:03:32.543 TEST_HEADER include/spdk/fd.h 00:03:32.543 TEST_HEADER include/spdk/file.h 00:03:32.543 TEST_HEADER include/spdk/ftl.h 00:03:32.543 TEST_HEADER include/spdk/gpt_spec.h 00:03:32.543 TEST_HEADER include/spdk/hexlify.h 00:03:32.543 TEST_HEADER include/spdk/histogram_data.h 00:03:32.543 TEST_HEADER include/spdk/idxd.h 00:03:32.543 TEST_HEADER include/spdk/idxd_spec.h 00:03:32.543 TEST_HEADER include/spdk/init.h 00:03:32.543 CC app/spdk_lspci/spdk_lspci.o 00:03:32.543 TEST_HEADER include/spdk/ioat.h 00:03:32.543 TEST_HEADER include/spdk/ioat_spec.h 00:03:32.543 TEST_HEADER include/spdk/iscsi_spec.h 00:03:32.543 TEST_HEADER include/spdk/json.h 00:03:32.543 TEST_HEADER include/spdk/jsonrpc.h 00:03:32.543 TEST_HEADER include/spdk/likely.h 00:03:32.544 TEST_HEADER include/spdk/log.h 00:03:32.544 TEST_HEADER include/spdk/lvol.h 00:03:32.544 TEST_HEADER include/spdk/memory.h 00:03:32.544 TEST_HEADER include/spdk/mmio.h 00:03:32.544 TEST_HEADER include/spdk/nbd.h 00:03:32.544 TEST_HEADER include/spdk/notify.h 00:03:32.544 TEST_HEADER include/spdk/nvme.h 00:03:32.544 TEST_HEADER include/spdk/nvme_intel.h 00:03:32.544 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:32.544 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:32.544 TEST_HEADER include/spdk/nvme_spec.h 00:03:32.544 TEST_HEADER include/spdk/nvme_zns.h 00:03:32.544 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:32.544 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:32.544 TEST_HEADER include/spdk/nvmf.h 00:03:32.801 
TEST_HEADER include/spdk/nvmf_spec.h 00:03:32.801 TEST_HEADER include/spdk/nvmf_transport.h 00:03:32.801 LINK led 00:03:32.801 CC examples/idxd/perf/perf.o 00:03:32.801 TEST_HEADER include/spdk/opal.h 00:03:32.801 TEST_HEADER include/spdk/opal_spec.h 00:03:32.801 TEST_HEADER include/spdk/pci_ids.h 00:03:32.801 TEST_HEADER include/spdk/pipe.h 00:03:32.801 LINK arbitration 00:03:32.801 TEST_HEADER include/spdk/queue.h 00:03:32.801 TEST_HEADER include/spdk/reduce.h 00:03:32.801 TEST_HEADER include/spdk/rpc.h 00:03:32.801 TEST_HEADER include/spdk/scheduler.h 00:03:32.801 TEST_HEADER include/spdk/scsi.h 00:03:32.801 TEST_HEADER include/spdk/scsi_spec.h 00:03:32.801 TEST_HEADER include/spdk/sock.h 00:03:32.801 TEST_HEADER include/spdk/stdinc.h 00:03:32.801 TEST_HEADER include/spdk/string.h 00:03:32.801 TEST_HEADER include/spdk/thread.h 00:03:32.801 TEST_HEADER include/spdk/trace.h 00:03:32.801 TEST_HEADER include/spdk/trace_parser.h 00:03:32.802 TEST_HEADER include/spdk/tree.h 00:03:32.802 TEST_HEADER include/spdk/ublk.h 00:03:32.802 TEST_HEADER include/spdk/util.h 00:03:32.802 TEST_HEADER include/spdk/uuid.h 00:03:32.802 TEST_HEADER include/spdk/version.h 00:03:32.802 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:32.802 LINK spdk_tgt 00:03:32.802 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:32.802 TEST_HEADER include/spdk/vhost.h 00:03:32.802 TEST_HEADER include/spdk/vmd.h 00:03:32.802 TEST_HEADER include/spdk/xor.h 00:03:32.802 TEST_HEADER include/spdk/zipf.h 00:03:32.802 CXX test/cpp_headers/accel.o 00:03:32.802 LINK spdk_lspci 00:03:32.802 CXX test/cpp_headers/accel_module.o 00:03:32.802 CC test/dma/test_dma/test_dma.o 00:03:32.802 CXX test/cpp_headers/assert.o 00:03:32.802 LINK thread 00:03:32.802 CC examples/nvme/hotplug/hotplug.o 00:03:33.059 CC test/app/jsoncat/jsoncat.o 00:03:33.059 CC app/spdk_nvme_perf/perf.o 00:03:33.059 CC app/spdk_nvme_identify/identify.o 00:03:33.059 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:33.059 CXX test/cpp_headers/barrier.o 00:03:33.059 CXX test/cpp_headers/base64.o 00:03:33.059 LINK idxd_perf 00:03:33.059 LINK jsoncat 00:03:33.318 LINK hotplug 00:03:33.318 LINK interrupt_tgt 00:03:33.318 LINK test_dma 00:03:33.318 CXX test/cpp_headers/bdev.o 00:03:33.318 CXX test/cpp_headers/bdev_module.o 00:03:33.318 CC test/app/stub/stub.o 00:03:33.318 CXX test/cpp_headers/bdev_zone.o 00:03:33.318 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:33.318 CC examples/nvme/abort/abort.o 00:03:33.576 CXX test/cpp_headers/bit_array.o 00:03:33.576 LINK stub 00:03:33.576 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:33.576 CC test/event/event_perf/event_perf.o 00:03:33.576 CC test/env/mem_callbacks/mem_callbacks.o 00:03:33.576 LINK cmb_copy 00:03:33.576 CXX test/cpp_headers/bit_pool.o 00:03:33.576 CXX test/cpp_headers/blob_bdev.o 00:03:33.841 LINK event_perf 00:03:33.841 LINK pmr_persistence 00:03:33.841 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.841 LINK iscsi_fuzz 00:03:33.841 CC test/event/reactor/reactor.o 00:03:33.841 CC test/event/reactor_perf/reactor_perf.o 00:03:33.841 CXX test/cpp_headers/blobfs.o 00:03:33.841 LINK abort 00:03:33.841 CC test/event/app_repeat/app_repeat.o 00:03:33.841 CXX test/cpp_headers/blob.o 00:03:34.100 LINK spdk_nvme_identify 00:03:34.100 LINK reactor 00:03:34.100 LINK spdk_nvme_perf 00:03:34.100 LINK reactor_perf 00:03:34.100 CXX test/cpp_headers/conf.o 00:03:34.100 CXX test/cpp_headers/config.o 00:03:34.100 CXX test/cpp_headers/cpuset.o 00:03:34.100 LINK app_repeat 00:03:34.100 CC test/event/scheduler/scheduler.o 00:03:34.100 CXX 
test/cpp_headers/crc16.o 00:03:34.100 CXX test/cpp_headers/crc32.o 00:03:34.100 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.100 CC app/spdk_top/spdk_top.o 00:03:34.100 LINK mem_callbacks 00:03:34.358 CC app/vhost/vhost.o 00:03:34.358 CC app/spdk_dd/spdk_dd.o 00:03:34.358 CXX test/cpp_headers/crc64.o 00:03:34.358 CC test/env/vtophys/vtophys.o 00:03:34.358 CC app/fio/nvme/fio_plugin.o 00:03:34.358 LINK spdk_nvme_discover 00:03:34.358 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:34.358 LINK scheduler 00:03:34.358 CC test/lvol/esnap/esnap.o 00:03:34.616 LINK vhost 00:03:34.616 CXX test/cpp_headers/dif.o 00:03:34.616 LINK vtophys 00:03:34.616 CXX test/cpp_headers/dma.o 00:03:34.616 LINK env_dpdk_post_init 00:03:34.616 CXX test/cpp_headers/endian.o 00:03:34.616 CXX test/cpp_headers/env_dpdk.o 00:03:34.616 CXX test/cpp_headers/env.o 00:03:34.874 LINK spdk_dd 00:03:34.874 CXX test/cpp_headers/event.o 00:03:34.874 CXX test/cpp_headers/fd_group.o 00:03:34.874 CXX test/cpp_headers/fd.o 00:03:34.874 CXX test/cpp_headers/file.o 00:03:34.874 CC app/fio/bdev/fio_plugin.o 00:03:34.874 CC test/env/memory/memory_ut.o 00:03:34.874 CXX test/cpp_headers/ftl.o 00:03:35.132 CC test/env/pci/pci_ut.o 00:03:35.132 CXX test/cpp_headers/gpt_spec.o 00:03:35.132 LINK spdk_nvme 00:03:35.132 CXX test/cpp_headers/hexlify.o 00:03:35.132 CXX test/cpp_headers/histogram_data.o 00:03:35.132 CXX test/cpp_headers/idxd.o 00:03:35.132 CXX test/cpp_headers/idxd_spec.o 00:03:35.391 CXX test/cpp_headers/init.o 00:03:35.391 CC test/rpc_client/rpc_client_test.o 00:03:35.391 LINK spdk_top 00:03:35.391 CC test/nvme/aer/aer.o 00:03:35.391 CXX test/cpp_headers/ioat.o 00:03:35.391 CXX test/cpp_headers/ioat_spec.o 00:03:35.391 CXX test/cpp_headers/iscsi_spec.o 00:03:35.391 LINK rpc_client_test 00:03:35.650 LINK spdk_bdev 00:03:35.650 LINK pci_ut 00:03:35.650 CXX test/cpp_headers/json.o 00:03:35.650 CC test/thread/poller_perf/poller_perf.o 00:03:35.650 CXX test/cpp_headers/jsonrpc.o 00:03:35.650 CXX test/cpp_headers/likely.o 00:03:35.650 LINK aer 00:03:35.650 CC test/nvme/reset/reset.o 00:03:35.650 CC test/nvme/sgl/sgl.o 00:03:35.650 CXX test/cpp_headers/log.o 00:03:35.650 LINK poller_perf 00:03:35.909 CC test/nvme/e2edp/nvme_dp.o 00:03:35.909 CC test/nvme/overhead/overhead.o 00:03:35.909 CXX test/cpp_headers/lvol.o 00:03:35.909 CC test/nvme/err_injection/err_injection.o 00:03:35.909 CC test/nvme/startup/startup.o 00:03:35.909 CC test/nvme/reserve/reserve.o 00:03:35.909 LINK memory_ut 00:03:35.909 LINK reset 00:03:35.909 LINK sgl 00:03:36.168 CXX test/cpp_headers/memory.o 00:03:36.168 LINK startup 00:03:36.168 LINK err_injection 00:03:36.168 CXX test/cpp_headers/mmio.o 00:03:36.168 LINK nvme_dp 00:03:36.168 LINK reserve 00:03:36.168 CC test/nvme/simple_copy/simple_copy.o 00:03:36.168 LINK overhead 00:03:36.168 CC test/nvme/connect_stress/connect_stress.o 00:03:36.426 CC test/nvme/boot_partition/boot_partition.o 00:03:36.426 CXX test/cpp_headers/nbd.o 00:03:36.426 CXX test/cpp_headers/notify.o 00:03:36.426 CC test/nvme/compliance/nvme_compliance.o 00:03:36.426 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.426 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:36.426 CC test/nvme/fdp/fdp.o 00:03:36.426 CC test/nvme/cuse/cuse.o 00:03:36.426 LINK connect_stress 00:03:36.426 CXX test/cpp_headers/nvme.o 00:03:36.426 LINK boot_partition 00:03:36.426 LINK simple_copy 00:03:36.426 LINK fused_ordering 00:03:36.687 LINK doorbell_aers 00:03:36.687 CXX test/cpp_headers/nvme_intel.o 00:03:36.687 CXX test/cpp_headers/nvme_ocssd.o 
00:03:36.687 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.687 CXX test/cpp_headers/nvme_spec.o 00:03:36.687 CXX test/cpp_headers/nvme_zns.o 00:03:36.687 LINK nvme_compliance 00:03:36.687 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.687 LINK fdp 00:03:36.687 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.945 CXX test/cpp_headers/nvmf.o 00:03:36.945 CXX test/cpp_headers/nvmf_spec.o 00:03:36.945 CXX test/cpp_headers/nvmf_transport.o 00:03:36.945 CXX test/cpp_headers/opal.o 00:03:36.945 CXX test/cpp_headers/opal_spec.o 00:03:36.945 CXX test/cpp_headers/pci_ids.o 00:03:36.945 CXX test/cpp_headers/pipe.o 00:03:36.945 CXX test/cpp_headers/queue.o 00:03:36.945 CXX test/cpp_headers/reduce.o 00:03:36.945 CXX test/cpp_headers/rpc.o 00:03:36.945 CXX test/cpp_headers/scheduler.o 00:03:36.945 CXX test/cpp_headers/scsi.o 00:03:37.204 CXX test/cpp_headers/scsi_spec.o 00:03:37.204 CXX test/cpp_headers/sock.o 00:03:37.204 CXX test/cpp_headers/stdinc.o 00:03:37.204 CXX test/cpp_headers/string.o 00:03:37.204 CXX test/cpp_headers/thread.o 00:03:37.204 CXX test/cpp_headers/trace.o 00:03:37.204 CXX test/cpp_headers/trace_parser.o 00:03:37.204 CXX test/cpp_headers/tree.o 00:03:37.204 CXX test/cpp_headers/ublk.o 00:03:37.204 CXX test/cpp_headers/util.o 00:03:37.204 CXX test/cpp_headers/uuid.o 00:03:37.204 CXX test/cpp_headers/version.o 00:03:37.204 CXX test/cpp_headers/vfio_user_pci.o 00:03:37.204 CXX test/cpp_headers/vfio_user_spec.o 00:03:37.463 CXX test/cpp_headers/vhost.o 00:03:37.463 CXX test/cpp_headers/vmd.o 00:03:37.463 CXX test/cpp_headers/xor.o 00:03:37.463 CXX test/cpp_headers/zipf.o 00:03:37.721 LINK cuse 00:03:40.258 LINK esnap 00:03:40.258 00:03:40.258 real 1m8.871s 00:03:40.258 user 7m3.308s 00:03:40.258 sys 1m25.090s 00:03:40.258 20:50:01 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:40.258 20:50:01 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.258 ************************************ 00:03:40.258 END TEST make 00:03:40.258 ************************************ 00:03:40.518 20:50:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:40.518 20:50:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:40.518 20:50:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:40.518 20:50:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:40.518 20:50:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:40.518 20:50:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:40.518 20:50:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:40.518 20:50:01 -- scripts/common.sh@335 -- # IFS=.-: 00:03:40.518 20:50:01 -- scripts/common.sh@335 -- # read -ra ver1 00:03:40.518 20:50:01 -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.518 20:50:01 -- scripts/common.sh@336 -- # read -ra ver2 00:03:40.518 20:50:01 -- scripts/common.sh@337 -- # local 'op=<' 00:03:40.518 20:50:01 -- scripts/common.sh@339 -- # ver1_l=2 00:03:40.518 20:50:01 -- scripts/common.sh@340 -- # ver2_l=1 00:03:40.518 20:50:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:40.518 20:50:01 -- scripts/common.sh@343 -- # case "$op" in 00:03:40.518 20:50:01 -- scripts/common.sh@344 -- # : 1 00:03:40.518 20:50:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:40.518 20:50:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.518 20:50:01 -- scripts/common.sh@364 -- # decimal 1 00:03:40.518 20:50:01 -- scripts/common.sh@352 -- # local d=1 00:03:40.518 20:50:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.518 20:50:01 -- scripts/common.sh@354 -- # echo 1 00:03:40.518 20:50:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:40.518 20:50:01 -- scripts/common.sh@365 -- # decimal 2 00:03:40.518 20:50:01 -- scripts/common.sh@352 -- # local d=2 00:03:40.518 20:50:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.518 20:50:01 -- scripts/common.sh@354 -- # echo 2 00:03:40.518 20:50:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:40.518 20:50:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:40.518 20:50:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:40.518 20:50:01 -- scripts/common.sh@367 -- # return 0 00:03:40.518 20:50:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.518 20:50:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:40.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.518 --rc genhtml_branch_coverage=1 00:03:40.518 --rc genhtml_function_coverage=1 00:03:40.518 --rc genhtml_legend=1 00:03:40.518 --rc geninfo_all_blocks=1 00:03:40.518 --rc geninfo_unexecuted_blocks=1 00:03:40.518 00:03:40.518 ' 00:03:40.518 20:50:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:40.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.518 --rc genhtml_branch_coverage=1 00:03:40.518 --rc genhtml_function_coverage=1 00:03:40.518 --rc genhtml_legend=1 00:03:40.518 --rc geninfo_all_blocks=1 00:03:40.518 --rc geninfo_unexecuted_blocks=1 00:03:40.518 00:03:40.518 ' 00:03:40.518 20:50:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:40.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.518 --rc genhtml_branch_coverage=1 00:03:40.518 --rc genhtml_function_coverage=1 00:03:40.518 --rc genhtml_legend=1 00:03:40.518 --rc geninfo_all_blocks=1 00:03:40.518 --rc geninfo_unexecuted_blocks=1 00:03:40.518 00:03:40.518 ' 00:03:40.518 20:50:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:40.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.518 --rc genhtml_branch_coverage=1 00:03:40.518 --rc genhtml_function_coverage=1 00:03:40.518 --rc genhtml_legend=1 00:03:40.518 --rc geninfo_all_blocks=1 00:03:40.518 --rc geninfo_unexecuted_blocks=1 00:03:40.518 00:03:40.518 ' 00:03:40.518 20:50:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:40.518 20:50:01 -- nvmf/common.sh@7 -- # uname -s 00:03:40.518 20:50:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.518 20:50:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.518 20:50:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.518 20:50:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.518 20:50:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.518 20:50:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.518 20:50:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.518 20:50:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.518 20:50:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.518 20:50:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.518 20:50:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c908302c-1db1-47eb-b733-054d9c59ff03 00:03:40.518 
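The xtrace above shows scripts/common.sh gating coverage flags on the installed lcov version: `lt 1.15 2` splits each version string on `.-:` into an array and compares it component by component. A minimal standalone sketch of the same pattern (ver_lt is an illustrative name, not the exact scripts/common.sh helper):

    #!/usr/bin/env bash
    # Component-wise "less than" version compare, in the shape traced above.
    # ver_lt is illustrative; scripts/common.sh does this via lt/cmp_versions.
    ver_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier component smaller: less-than
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # earlier component larger: not less-than
        done
        return 1                                        # equal versions: not less-than
    }

    ver_lt 1.15 2 && echo "lcov predates 2.x"

Here lcov 1.15 sorts below 2, so the trace takes the branch that sets the 1.x-style `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options seen in the LCOV_OPTS exports.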
20:50:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=c908302c-1db1-47eb-b733-054d9c59ff03 00:03:40.518 20:50:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.518 20:50:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.518 20:50:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:40.518 20:50:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:40.518 20:50:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.518 20:50:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.518 20:50:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.518 20:50:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.518 20:50:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.518 20:50:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.518 20:50:01 -- paths/export.sh@5 -- # export PATH 00:03:40.518 20:50:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.518 20:50:01 -- nvmf/common.sh@46 -- # : 0 00:03:40.518 20:50:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:40.518 20:50:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:40.518 20:50:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:40.518 20:50:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.518 20:50:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.518 20:50:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:40.518 20:50:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:40.518 20:50:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:40.518 20:50:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:40.518 20:50:01 -- spdk/autotest.sh@32 -- # uname -s 00:03:40.778 20:50:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:40.778 20:50:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:40.778 20:50:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.778 20:50:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:40.778 20:50:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.778 20:50:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.778 20:50:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.778 20:50:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.778 20:50:01 -- spdk/autotest.sh@48 
-- # udevadm_pid=48453 00:03:40.778 20:50:01 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:40.778 20:50:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.778 20:50:01 -- spdk/autotest.sh@54 -- # echo 48455 00:03:40.778 20:50:01 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:40.778 20:50:01 -- spdk/autotest.sh@56 -- # echo 48457 00:03:40.778 20:50:01 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:40.778 20:50:01 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:40.778 20:50:01 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:40.778 20:50:01 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:40.778 20:50:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:40.778 20:50:01 -- common/autotest_common.sh@10 -- # set +x 00:03:40.778 20:50:01 -- spdk/autotest.sh@70 -- # create_test_list 00:03:40.778 20:50:01 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:40.778 20:50:01 -- common/autotest_common.sh@10 -- # set +x 00:03:40.778 20:50:01 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:40.778 20:50:01 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:40.778 20:50:01 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:40.778 20:50:01 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:40.778 20:50:01 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:40.778 20:50:01 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:40.778 20:50:01 -- common/autotest_common.sh@1450 -- # uname 00:03:40.778 20:50:01 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:40.778 20:50:01 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:40.778 20:50:01 -- common/autotest_common.sh@1470 -- # uname 00:03:40.778 20:50:01 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:40.778 20:50:01 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:40.778 20:50:01 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:40.778 lcov: LCOV version 1.15 00:03:40.778 20:50:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:48.906 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:48.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:48.906 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:48.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:48.906 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:48.906 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:07.000 20:50:25 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:07.000 20:50:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:07.000 20:50:25 -- common/autotest_common.sh@10 -- # set +x 00:04:07.000 20:50:25 -- spdk/autotest.sh@89 -- # rm -f 00:04:07.000 20:50:25 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.000 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:07.000 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:07.000 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:07.000 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:07.000 20:50:26 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:07.000 20:50:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:07.000 20:50:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:07.000 20:50:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:07.000 20:50:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.000 20:50:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:07.000 20:50:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:07.000 20:50:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.000 20:50:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:07.000 20:50:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:07.000 20:50:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.000 20:50:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:07.000 20:50:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:07.000 20:50:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.000 20:50:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:07.000 20:50:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:07.000 20:50:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.000 20:50:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2c2n1 00:04:07.000 20:50:26 -- common/autotest_common.sh@1657 -- # local device=nvme2c2n1 00:04:07.000 20:50:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.000 20:50:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:07.000 20:50:26 -- 
common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:07.000 20:50:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.000 20:50:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:07.000 20:50:26 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:07.000 20:50:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:07.000 20:50:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.000 20:50:26 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:07.000 20:50:26 -- spdk/autotest.sh@108 -- # grep -v p 00:04:07.000 20:50:26 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 /dev/nvme2n1 /dev/nvme3n1 00:04:07.000 20:50:26 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.000 20:50:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.000 20:50:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:07.000 20:50:26 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:07.000 20:50:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:07.000 No valid GPT data, bailing 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # pt= 00:04:07.000 20:50:27 -- scripts/common.sh@394 -- # return 1 00:04:07.000 20:50:27 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:07.000 1+0 records in 00:04:07.000 1+0 records out 00:04:07.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443372 s, 237 MB/s 00:04:07.000 20:50:27 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.000 20:50:27 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.000 20:50:27 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:07.000 20:50:27 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:07.000 20:50:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:07.000 No valid GPT data, bailing 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # pt= 00:04:07.000 20:50:27 -- scripts/common.sh@394 -- # return 1 00:04:07.000 20:50:27 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:07.000 1+0 records in 00:04:07.000 1+0 records out 00:04:07.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396652 s, 264 MB/s 00:04:07.000 20:50:27 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.000 20:50:27 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.000 20:50:27 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:07.000 20:50:27 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:07.000 20:50:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:07.000 No valid GPT data, bailing 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # pt= 00:04:07.000 20:50:27 -- scripts/common.sh@394 -- # return 1 00:04:07.000 20:50:27 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:07.000 1+0 
records in 00:04:07.000 1+0 records out 00:04:07.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415696 s, 252 MB/s 00:04:07.000 20:50:27 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.000 20:50:27 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.000 20:50:27 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:07.000 20:50:27 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:07.000 20:50:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:07.000 No valid GPT data, bailing 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # pt= 00:04:07.000 20:50:27 -- scripts/common.sh@394 -- # return 1 00:04:07.000 20:50:27 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:07.000 1+0 records in 00:04:07.000 1+0 records out 00:04:07.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379846 s, 276 MB/s 00:04:07.000 20:50:27 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.000 20:50:27 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.000 20:50:27 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:04:07.000 20:50:27 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:07.000 20:50:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:07.000 No valid GPT data, bailing 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # pt= 00:04:07.000 20:50:27 -- scripts/common.sh@394 -- # return 1 00:04:07.000 20:50:27 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:07.000 1+0 records in 00:04:07.000 1+0 records out 00:04:07.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405462 s, 259 MB/s 00:04:07.000 20:50:27 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.000 20:50:27 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.000 20:50:27 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:04:07.000 20:50:27 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:07.000 20:50:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:07.000 No valid GPT data, bailing 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:07.000 20:50:27 -- scripts/common.sh@393 -- # pt= 00:04:07.000 20:50:27 -- scripts/common.sh@394 -- # return 1 00:04:07.000 20:50:27 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:07.000 1+0 records in 00:04:07.000 1+0 records out 00:04:07.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140592 s, 74.6 MB/s 00:04:07.000 20:50:27 -- spdk/autotest.sh@116 -- # sync 00:04:07.000 20:50:27 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:07.000 20:50:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:07.000 20:50:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:08.907 20:50:29 -- spdk/autotest.sh@122 -- # uname -s 00:04:08.907 20:50:29 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:08.907 20:50:29 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:08.907 20:50:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.907 20:50:29 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.907 20:50:29 -- common/autotest_common.sh@10 -- # set +x 00:04:08.907 ************************************ 00:04:08.907 START TEST setup.sh 00:04:08.907 ************************************ 00:04:08.907 20:50:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:08.907 * Looking for test storage... 00:04:08.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.907 20:50:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:08.907 20:50:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:08.907 20:50:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:08.907 20:50:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:08.907 20:50:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:08.907 20:50:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:08.907 20:50:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:08.907 20:50:29 -- scripts/common.sh@335 -- # IFS=.-: 00:04:08.907 20:50:29 -- scripts/common.sh@335 -- # read -ra ver1 00:04:08.907 20:50:29 -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.907 20:50:29 -- scripts/common.sh@336 -- # read -ra ver2 00:04:08.907 20:50:29 -- scripts/common.sh@337 -- # local 'op=<' 00:04:08.907 20:50:29 -- scripts/common.sh@339 -- # ver1_l=2 00:04:08.907 20:50:29 -- scripts/common.sh@340 -- # ver2_l=1 00:04:08.907 20:50:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:08.907 20:50:29 -- scripts/common.sh@343 -- # case "$op" in 00:04:08.907 20:50:29 -- scripts/common.sh@344 -- # : 1 00:04:08.907 20:50:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:08.907 20:50:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.907 20:50:29 -- scripts/common.sh@364 -- # decimal 1 00:04:08.907 20:50:29 -- scripts/common.sh@352 -- # local d=1 00:04:08.907 20:50:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.907 20:50:29 -- scripts/common.sh@354 -- # echo 1 00:04:08.907 20:50:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:08.907 20:50:29 -- scripts/common.sh@365 -- # decimal 2 00:04:08.907 20:50:29 -- scripts/common.sh@352 -- # local d=2 00:04:08.907 20:50:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.907 20:50:29 -- scripts/common.sh@354 -- # echo 2 00:04:08.907 20:50:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:08.907 20:50:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:08.907 20:50:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:08.907 20:50:29 -- scripts/common.sh@367 -- # return 0 00:04:08.907 20:50:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.907 20:50:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:08.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.907 --rc genhtml_branch_coverage=1 00:04:08.907 --rc genhtml_function_coverage=1 00:04:08.907 --rc genhtml_legend=1 00:04:08.907 --rc geninfo_all_blocks=1 00:04:08.907 --rc geninfo_unexecuted_blocks=1 00:04:08.907 00:04:08.907 ' 00:04:08.907 20:50:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:08.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.907 --rc genhtml_branch_coverage=1 00:04:08.907 --rc genhtml_function_coverage=1 00:04:08.907 --rc genhtml_legend=1 00:04:08.907 --rc geninfo_all_blocks=1 00:04:08.907 --rc geninfo_unexecuted_blocks=1 00:04:08.907 00:04:08.907 ' 00:04:08.907 20:50:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:08.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.907 --rc genhtml_branch_coverage=1 00:04:08.907 --rc genhtml_function_coverage=1 00:04:08.907 --rc genhtml_legend=1 00:04:08.907 --rc geninfo_all_blocks=1 00:04:08.907 --rc geninfo_unexecuted_blocks=1 00:04:08.907 00:04:08.907 ' 00:04:08.907 20:50:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:08.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.907 --rc genhtml_branch_coverage=1 00:04:08.907 --rc genhtml_function_coverage=1 00:04:08.907 --rc genhtml_legend=1 00:04:08.907 --rc geninfo_all_blocks=1 00:04:08.907 --rc geninfo_unexecuted_blocks=1 00:04:08.907 00:04:08.907 ' 00:04:08.907 20:50:29 -- setup/test-setup.sh@10 -- # uname -s 00:04:08.907 20:50:29 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:08.907 20:50:29 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:08.907 20:50:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.907 20:50:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.907 20:50:29 -- common/autotest_common.sh@10 -- # set +x 00:04:08.907 ************************************ 00:04:08.907 START TEST acl 00:04:08.907 ************************************ 00:04:08.907 20:50:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:09.166 * Looking for test storage... 
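For reference, the namespace-preparation loop earlier in the log (spdk-gpt.py, then `blkid -s PTTYPE -o value`, then `dd`) reduces to roughly the shape below: probe each whole namespace for a partition table and zero its first MiB only when none is found. A minimal sketch, assuming root privileges; wipe_if_unused is an illustrative name, not an SPDK helper, and the dd is destructive by design:

    #!/usr/bin/env bash
    # Probe-then-wipe loop in the shape traced earlier in autotest.sh.
    # wipe_if_unused is an illustrative name, not an SPDK helper.
    wipe_if_unused() {
        local dev=$1 pt
        pt=$(blkid -s PTTYPE -o value "$dev")            # empty when no GPT/MBR is found
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1      # destructive: zeroes the first MiB
        else
            echo "$dev carries a $pt partition table, skipping" >&2
        fi
    }

    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue                    # keep whole namespaces, skip partitions
        wipe_if_unused "$dev"
    done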
00:04:09.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:09.166 20:50:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:09.166 20:50:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:09.166 20:50:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:09.166 20:50:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:09.166 20:50:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:09.166 20:50:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:09.166 20:50:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:09.167 20:50:30 -- scripts/common.sh@335 -- # IFS=.-: 00:04:09.167 20:50:30 -- scripts/common.sh@335 -- # read -ra ver1 00:04:09.167 20:50:30 -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.167 20:50:30 -- scripts/common.sh@336 -- # read -ra ver2 00:04:09.167 20:50:30 -- scripts/common.sh@337 -- # local 'op=<' 00:04:09.167 20:50:30 -- scripts/common.sh@339 -- # ver1_l=2 00:04:09.167 20:50:30 -- scripts/common.sh@340 -- # ver2_l=1 00:04:09.167 20:50:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:09.167 20:50:30 -- scripts/common.sh@343 -- # case "$op" in 00:04:09.167 20:50:30 -- scripts/common.sh@344 -- # : 1 00:04:09.167 20:50:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:09.167 20:50:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.167 20:50:30 -- scripts/common.sh@364 -- # decimal 1 00:04:09.167 20:50:30 -- scripts/common.sh@352 -- # local d=1 00:04:09.167 20:50:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.167 20:50:30 -- scripts/common.sh@354 -- # echo 1 00:04:09.167 20:50:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:09.167 20:50:30 -- scripts/common.sh@365 -- # decimal 2 00:04:09.167 20:50:30 -- scripts/common.sh@352 -- # local d=2 00:04:09.167 20:50:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.167 20:50:30 -- scripts/common.sh@354 -- # echo 2 00:04:09.167 20:50:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:09.167 20:50:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:09.167 20:50:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:09.167 20:50:30 -- scripts/common.sh@367 -- # return 0 00:04:09.167 20:50:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.167 20:50:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:09.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.167 --rc genhtml_branch_coverage=1 00:04:09.167 --rc genhtml_function_coverage=1 00:04:09.167 --rc genhtml_legend=1 00:04:09.167 --rc geninfo_all_blocks=1 00:04:09.167 --rc geninfo_unexecuted_blocks=1 00:04:09.167 00:04:09.167 ' 00:04:09.167 20:50:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:09.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.167 --rc genhtml_branch_coverage=1 00:04:09.167 --rc genhtml_function_coverage=1 00:04:09.167 --rc genhtml_legend=1 00:04:09.167 --rc geninfo_all_blocks=1 00:04:09.167 --rc geninfo_unexecuted_blocks=1 00:04:09.167 00:04:09.167 ' 00:04:09.167 20:50:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:09.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.167 --rc genhtml_branch_coverage=1 00:04:09.167 --rc genhtml_function_coverage=1 00:04:09.167 --rc genhtml_legend=1 00:04:09.167 --rc geninfo_all_blocks=1 00:04:09.167 --rc geninfo_unexecuted_blocks=1 00:04:09.167 00:04:09.167 ' 00:04:09.167 20:50:30 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:09.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.167 --rc genhtml_branch_coverage=1 00:04:09.167 --rc genhtml_function_coverage=1 00:04:09.167 --rc genhtml_legend=1 00:04:09.167 --rc geninfo_all_blocks=1 00:04:09.167 --rc geninfo_unexecuted_blocks=1 00:04:09.167 00:04:09.167 ' 00:04:09.167 20:50:30 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:09.167 20:50:30 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:09.167 20:50:30 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:09.167 20:50:30 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:09.167 20:50:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.167 20:50:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:09.167 20:50:30 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:09.167 20:50:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.167 20:50:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:09.167 20:50:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:09.167 20:50:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.167 20:50:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:09.167 20:50:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:09.167 20:50:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.167 20:50:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:09.167 20:50:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:09.167 20:50:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.167 20:50:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2c2n1 00:04:09.167 20:50:30 -- common/autotest_common.sh@1657 -- # local device=nvme2c2n1 00:04:09.167 20:50:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.167 20:50:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:09.167 20:50:30 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:09.167 20:50:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.167 20:50:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:09.167 20:50:30 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:09.167 
20:50:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:09.167 20:50:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.167 20:50:30 -- setup/acl.sh@12 -- # devs=() 00:04:09.167 20:50:30 -- setup/acl.sh@12 -- # declare -a devs 00:04:09.167 20:50:30 -- setup/acl.sh@13 -- # drivers=() 00:04:09.167 20:50:30 -- setup/acl.sh@13 -- # declare -A drivers 00:04:09.167 20:50:30 -- setup/acl.sh@51 -- # setup reset 00:04:09.167 20:50:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.167 20:50:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.545 20:50:31 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:10.545 20:50:31 -- setup/acl.sh@16 -- # local dev driver 00:04:10.545 20:50:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:10.545 20:50:31 -- setup/acl.sh@15 -- # setup output status 00:04:10.545 20:50:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.545 20:50:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:10.545 Hugepages 00:04:10.545 node hugesize free / total 00:04:10.545 20:50:31 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:10.545 20:50:31 -- setup/acl.sh@19 -- # continue 00:04:10.545 20:50:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:10.545 00:04:10.545 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:10.545 20:50:31 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:10.545 20:50:31 -- setup/acl.sh@19 -- # continue 00:04:10.545 20:50:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:10.545 20:50:31 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:10.545 20:50:31 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:10.545 20:50:31 -- setup/acl.sh@20 -- # continue 00:04:10.545 20:50:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:10.804 20:50:31 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:10.804 20:50:31 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:10.804 20:50:31 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:10.804 20:50:31 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:10.804 20:50:31 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:10.804 20:50:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:10.804 20:50:31 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:10.804 20:50:31 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:10.804 20:50:31 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:10.804 20:50:31 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:10.804 20:50:31 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:10.804 20:50:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:10.804 20:50:31 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:10.804 20:50:31 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:10.804 20:50:31 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:10.804 20:50:31 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:10.804 20:50:31 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:10.804 20:50:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.064 20:50:31 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:11.064 20:50:31 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:11.064 20:50:31 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:11.064 20:50:31 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:11.064 20:50:31 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
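The get_zoned_devs sweep traced above decides per namespace by reading /sys/block/&lt;name&gt;/queue/zoned and treating any value other than `none` as zoned; here every namespace reports `none`, so the blocked-device map stays empty. The same check in isolation, with illustrative variable names:

    #!/usr/bin/env bash
    # Zoned-namespace detection as traced above: a device counts as zoned
    # when /sys/block/<dev>/queue/zoned holds anything other than "none".
    declare -A zoned_devs
    for sysdev in /sys/block/nvme*; do
        [[ -e $sysdev/queue/zoned ]] || continue
        if [[ $(<"$sysdev/queue/zoned") != none ]]; then
            zoned_devs[${sysdev##*/}]=1                  # e.g. zoned_devs[nvme0n1]=1
        fi
    done
    echo "zoned namespaces: ${!zoned_devs[*]}"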
00:04:11.064 20:50:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.064 20:50:31 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:11.064 20:50:31 -- setup/acl.sh@54 -- # run_test denied denied 00:04:11.064 20:50:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.064 20:50:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.064 20:50:31 -- common/autotest_common.sh@10 -- # set +x 00:04:11.064 ************************************ 00:04:11.064 START TEST denied 00:04:11.064 ************************************ 00:04:11.064 20:50:31 -- common/autotest_common.sh@1114 -- # denied 00:04:11.064 20:50:31 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:11.064 20:50:31 -- setup/acl.sh@38 -- # setup output config 00:04:11.064 20:50:31 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:11.064 20:50:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.064 20:50:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.445 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:12.445 20:50:33 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:12.445 20:50:33 -- setup/acl.sh@28 -- # local dev driver 00:04:12.445 20:50:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:12.445 20:50:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:12.445 20:50:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:12.445 20:50:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:12.445 20:50:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:12.445 20:50:33 -- setup/acl.sh@41 -- # setup reset 00:04:12.445 20:50:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.445 20:50:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.016 00:04:19.016 real 0m7.280s 00:04:19.016 user 0m0.915s 00:04:19.016 sys 0m1.424s 00:04:19.016 20:50:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.016 20:50:39 -- common/autotest_common.sh@10 -- # set +x 00:04:19.016 ************************************ 00:04:19.016 END TEST denied 00:04:19.016 ************************************ 00:04:19.016 20:50:39 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:19.016 20:50:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.017 20:50:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.017 20:50:39 -- common/autotest_common.sh@10 -- # set +x 00:04:19.017 ************************************ 00:04:19.017 START TEST allowed 00:04:19.017 ************************************ 00:04:19.017 20:50:39 -- common/autotest_common.sh@1114 -- # allowed 00:04:19.017 20:50:39 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:19.017 20:50:39 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:19.017 20:50:39 -- setup/acl.sh@45 -- # setup output config 00:04:19.017 20:50:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.017 20:50:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.584 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.584 20:50:40 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:19.584 20:50:40 -- setup/acl.sh@28 -- # local dev driver 00:04:19.584 20:50:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:19.584 20:50:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:19.584 20:50:40 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:04:19.584 20:50:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:19.584 20:50:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:19.584 20:50:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:19.584 20:50:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:19.584 20:50:40 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:19.584 20:50:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:19.584 20:50:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:19.584 20:50:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:19.584 20:50:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:19.584 20:50:40 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:19.584 20:50:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:19.584 20:50:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:19.584 20:50:40 -- setup/acl.sh@48 -- # setup reset 00:04:19.584 20:50:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.584 20:50:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.519 00:04:20.519 real 0m2.325s 00:04:20.519 user 0m1.091s 00:04:20.519 sys 0m1.242s 00:04:20.519 20:50:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.519 20:50:41 -- common/autotest_common.sh@10 -- # set +x 00:04:20.519 ************************************ 00:04:20.519 END TEST allowed 00:04:20.519 ************************************ 00:04:20.778 00:04:20.778 real 0m11.630s 00:04:20.778 user 0m2.949s 00:04:20.778 sys 0m3.782s 00:04:20.778 20:50:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.778 20:50:41 -- common/autotest_common.sh@10 -- # set +x 00:04:20.778 ************************************ 00:04:20.778 END TEST acl 00:04:20.778 ************************************ 00:04:20.778 20:50:41 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:20.778 20:50:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.778 20:50:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.778 20:50:41 -- common/autotest_common.sh@10 -- # set +x 00:04:20.778 ************************************ 00:04:20.778 START TEST hugepages 00:04:20.778 ************************************ 00:04:20.778 20:50:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:20.778 * Looking for test storage... 
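Both the denied and the allowed test settle on the same verification traced above: resolve the device's `driver` symlink under /sys/bus/pci/devices and compare its basename with the expected driver. A minimal sketch of that check; expect_driver is an illustrative helper, not part of setup/acl.sh:

    #!/usr/bin/env bash
    # Verify which kernel driver a PCI function is bound to, as the acl
    # tests do. expect_driver is an illustrative name.
    expect_driver() {
        local bdf=$1 want=$2 link
        link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver") || return 1
        [[ ${link##*/} == "$want" ]]
    }

    # After the allowed test's setup.sh config pass, the log shows
    # 'nvme -> uio_pci_generic' for 0000:00:06.0, so this should succeed:
    expect_driver 0000:00:06.0 uio_pci_generic && echo "bound as expected"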
00:04:20.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:20.778 20:50:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:20.778 20:50:41 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:20.778 20:50:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:20.778 20:50:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:20.778 20:50:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:20.778 20:50:41 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:20.778 20:50:41 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:20.778 20:50:41 -- scripts/common.sh@335 -- # IFS=.-:
00:04:20.778 20:50:41 -- scripts/common.sh@335 -- # read -ra ver1
00:04:20.778 20:50:41 -- scripts/common.sh@336 -- # IFS=.-:
00:04:20.778 20:50:41 -- scripts/common.sh@336 -- # read -ra ver2
00:04:20.778 20:50:41 -- scripts/common.sh@337 -- # local 'op=<'
00:04:20.778 20:50:41 -- scripts/common.sh@339 -- # ver1_l=2
00:04:20.778 20:50:41 -- scripts/common.sh@340 -- # ver2_l=1
00:04:20.778 20:50:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:20.778 20:50:41 -- scripts/common.sh@343 -- # case "$op" in
00:04:20.778 20:50:41 -- scripts/common.sh@344 -- # : 1
00:04:20.778 20:50:41 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:20.778 20:50:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:20.778 20:50:41 -- scripts/common.sh@364 -- # decimal 1
00:04:20.778 20:50:41 -- scripts/common.sh@352 -- # local d=1
00:04:20.778 20:50:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:20.778 20:50:41 -- scripts/common.sh@354 -- # echo 1
00:04:20.778 20:50:41 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:20.778 20:50:41 -- scripts/common.sh@365 -- # decimal 2
00:04:20.778 20:50:41 -- scripts/common.sh@352 -- # local d=2
00:04:20.778 20:50:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:20.778 20:50:41 -- scripts/common.sh@354 -- # echo 2
00:04:20.778 20:50:41 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:20.778 20:50:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:21.041 20:50:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:21.041 20:50:41 -- scripts/common.sh@367 -- # return 0
00:04:21.041 20:50:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:21.041 20:50:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:04:21.041 20:50:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' [same --rc option block as above] '
00:04:21.041 20:50:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov [same --rc option block as above] '
00:04:21.041 20:50:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov [same --rc option block as above] '
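The block above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, splitting both strings on dot-like separators and comparing numeric components left to right. A condensed sketch of that comparison; version_lt is a made-up name, and the real cmp_versions also normalizes non-numeric components, which is skipped here:

    # Return 0 (true) when version $1 sorts strictly before version $2.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first smaller part wins
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # 1 < 2 at component 0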
00:04:21.041 20:50:41 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:21.041 20:50:41 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:21.041 20:50:41 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:21.041 20:50:41 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:21.041 20:50:41 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:21.041 20:50:41 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:21.041 20:50:41 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:21.041 20:50:41 -- setup/common.sh@18 -- # local node=
00:04:21.041 20:50:41 -- setup/common.sh@19 -- # local var val
00:04:21.041 20:50:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.041 20:50:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.041 20:50:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.041 20:50:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.041 20:50:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.041 20:50:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.041 20:50:41 -- setup/common.sh@31 -- # IFS=': '
00:04:21.041 20:50:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 5793476 kB' 'MemAvailable: 7360104 kB' 'Buffers: 2684 kB' 'Cached: 1779596 kB' 'SwapCached: 0 kB' 'Active: 469148 kB' 'Inactive: 1428880 kB' 'Active(anon): 126280 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 117404 kB' 'Mapped: 53900 kB' 'Shmem: 10532 kB' 'KReclaimable: 63656 kB' 'Slab: 159344 kB' 'SReclaimable: 63656 kB' 'SUnreclaim: 95688 kB' 'KernelStack: 6464 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 317228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
[... ~55 xtrace iterations elided: get_meminfo reads the snapshot field by field (MemTotal through HugePages_Surp), each one failing the Hugepagesize match and hitting continue ...]
00:04:21.043 20:50:41 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:21.043 20:50:41 -- setup/common.sh@33 -- # echo 2048
00:04:21.043 20:50:41 -- setup/common.sh@33 -- # return 0
00:04:21.043 20:50:41 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:21.043 20:50:41 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
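Everything between the printf above and the echo 2048 is get_meminfo walking the /proc/meminfo snapshot line by line until the requested key matches. A sketch of an equivalent lookup, assuming plain single-value fields; the awk form replaces the trace's read/continue loop and is not the suite's actual implementation:

    # Fetch one value from /proc/meminfo, e.g. Hugepagesize -> 2048 (kB).
    get_meminfo() {
        awk -v key="$1:" '$1 == key { print $2; exit }' /proc/meminfo
    }

    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on this VM
    nr_free=$(get_meminfo HugePages_Free)           # page counts have no kB suffix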
00:04:21.043 20:50:41 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:21.043 20:50:41 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:21.043 20:50:41 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:21.043 20:50:41 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:21.043 20:50:41 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:21.043 20:50:41 -- setup/hugepages.sh@207 -- # get_nodes
00:04:21.043 20:50:41 -- setup/hugepages.sh@27 -- # local node
00:04:21.043 20:50:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.043 20:50:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:21.043 20:50:41 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:21.043 20:50:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:21.043 20:50:41 -- setup/hugepages.sh@208 -- # clear_hp
00:04:21.043 20:50:41 -- setup/hugepages.sh@37 -- # local node hp
00:04:21.043 20:50:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:21.043 20:50:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:21.043 20:50:41 -- setup/hugepages.sh@41 -- # echo 0
00:04:21.043 20:50:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:21.043 20:50:41 -- setup/hugepages.sh@41 -- # echo 0
00:04:21.043 20:50:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:21.043 20:50:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:21.043 20:50:41 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:21.043 20:50:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:21.043 20:50:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:21.043 20:50:41 -- common/autotest_common.sh@10 -- # set +x
00:04:21.043 ************************************
00:04:21.043 START TEST default_setup
00:04:21.043 ************************************
00:04:21.043 20:50:41 -- common/autotest_common.sh@1114 -- # default_setup
00:04:21.043 20:50:41 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:21.043 20:50:41 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:21.043 20:50:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:21.043 20:50:41 -- setup/hugepages.sh@51 -- # shift
00:04:21.043 20:50:41 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:21.043 20:50:41 -- setup/hugepages.sh@52 -- # local node_ids
00:04:21.043 20:50:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:21.043 20:50:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:21.043 20:50:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:21.043 20:50:41 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:21.043 20:50:41 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:21.043 20:50:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:21.043 20:50:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:21.043 20:50:41 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:21.043 20:50:41 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:21.043 20:50:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:21.043 20:50:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:21.043 20:50:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:21.043 20:50:41 -- setup/hugepages.sh@73 -- # return 0
00:04:21.043 20:50:41 -- setup/hugepages.sh@137 -- # setup output
00:04:21.043 20:50:41 -- setup/common.sh@9 -- # [[ output == output ]]
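get_test_nr_hugepages 2097152 0 above turns a size budget into a page count pinned to node 0: the resulting nr_hugepages=1024 with a 2048 kB page size implies the size argument is in kB (2 GiB total). A sketch of that arithmetic, reusing the hypothetical get_meminfo helper from the earlier sketch:

    # Pages needed for a 2 GiB test pool with 2048 kB hugepages.
    size_kb=2097152                                 # 2 GiB expressed in kB
    hugepagesize_kb=$(get_meminfo Hugepagesize)     # 2048 on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024
    echo "$nr_hugepages" | sudo tee /proc/sys/vm/nr_hugepages   # needs root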
00:04:21.043 20:50:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:22.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:22.039 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.039 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.039 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.302 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.302 20:50:43 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:22.302 20:50:43 -- setup/hugepages.sh@89 -- # local node
00:04:22.302 20:50:43 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:22.302 20:50:43 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:22.302 20:50:43 -- setup/hugepages.sh@92 -- # local surp
00:04:22.302 20:50:43 -- setup/hugepages.sh@93 -- # local resv
00:04:22.302 20:50:43 -- setup/hugepages.sh@94 -- # local anon
00:04:22.302 20:50:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:22.302 20:50:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:22.302 20:50:43 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:22.302 20:50:43 -- setup/common.sh@18 -- # local node=
00:04:22.302 20:50:43 -- setup/common.sh@19 -- # local var val
00:04:22.302 20:50:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.302 20:50:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.302 20:50:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.302 20:50:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.302 20:50:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.302 20:50:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.302 20:50:43 -- setup/common.sh@31 -- # IFS=': '
00:04:22.302 20:50:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7887736 kB' 'MemAvailable: 9454072 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471540 kB' 'Inactive: 1428900 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 53528 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158772 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95744 kB' 'KernelStack: 6432 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
[... ~40 xtrace iterations elided: the same read/continue scan over the snapshot, this time matching each field against AnonHugePages ...]
00:04:22.303 20:50:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.303 20:50:43 -- setup/common.sh@33 -- # echo 0
00:04:22.303 20:50:43 -- setup/common.sh@33 -- # return 0
00:04:22.303 20:50:43 -- setup/hugepages.sh@97 -- # anon=0
00:04:22.303 20:50:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:22.303 20:50:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.303 20:50:43 -- setup/common.sh@18 -- # local node=
00:04:22.303 20:50:43 -- setup/common.sh@19 -- # local var val
00:04:22.303 20:50:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.303 20:50:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.303 20:50:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.303 20:50:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.303 20:50:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.303 20:50:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.303 20:50:43 -- setup/common.sh@31 -- # IFS=': '
00:04:22.304 20:50:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7887736 kB' 'MemAvailable: 9454072 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471128 kB' 'Inactive: 1428900 kB' 'Active(anon): 128260 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119308 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158780 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95752 kB' 'KernelStack: 6416 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
[... ~49 xtrace iterations elided: read/continue scan over the snapshot, matching each field against HugePages_Surp ...]
00:04:22.305 20:50:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.305 20:50:43 -- setup/common.sh@33 -- # echo 0
00:04:22.305 20:50:43 -- setup/common.sh@33 -- # return 0
00:04:22.305 20:50:43 -- setup/hugepages.sh@99 -- # surp=0
00:04:22.305 20:50:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.305 20:50:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:22.305 20:50:43 -- setup/common.sh@18 -- # local node=
00:04:22.305 20:50:43 -- setup/common.sh@19 -- # local var val
00:04:22.305 20:50:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.305 20:50:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.305 20:50:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.305 20:50:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.305 20:50:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.305 20:50:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.305 20:50:43 -- setup/common.sh@31 -- # IFS=': '
00:04:22.306 20:50:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7887736 kB' 'MemAvailable: 9454072 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471388 kB' 'Inactive: 1428900 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119568 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158780 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95752 kB' 'KernelStack: 6416 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
[... xtrace iterations elided: read/continue scan matching each field against HugePages_Rsvd, shown from MemTotal through WritebackTmp before the captured log breaks off ...]
00:04:22.307 20:50:43 --
setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.307 20:50:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.307 20:50:43 -- setup/common.sh@33 -- # echo 0 00:04:22.307 20:50:43 -- setup/common.sh@33 -- # return 0 00:04:22.307 20:50:43 -- setup/hugepages.sh@100 -- # resv=0 00:04:22.307 nr_hugepages=1024 00:04:22.307 20:50:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.307 resv_hugepages=0 00:04:22.307 20:50:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.307 surplus_hugepages=0 00:04:22.307 20:50:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.307 anon_hugepages=0 00:04:22.307 20:50:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.307 20:50:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.307 20:50:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.307 20:50:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.307 20:50:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.307 20:50:43 -- setup/common.sh@18 -- # local node= 00:04:22.307 20:50:43 -- setup/common.sh@19 -- # local var val 00:04:22.307 20:50:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.307 20:50:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.307 20:50:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.307 20:50:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.307 20:50:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.307 20:50:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.307 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7887736 kB' 'MemAvailable: 9454072 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471212 kB' 'Inactive: 1428900 kB' 'Active(anon): 128344 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 
'Writeback: 0 kB' 'AnonPages: 119432 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158736 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95708 kB' 'KernelStack: 6432 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 
20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.308 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.308 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.309 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.309 20:50:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.309 20:50:43 -- setup/common.sh@33 -- # echo 1024 00:04:22.309 20:50:43 -- setup/common.sh@33 -- # return 0 00:04:22.310 20:50:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.310 20:50:43 -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.310 20:50:43 -- setup/hugepages.sh@27 -- # local node 00:04:22.310 20:50:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.310 20:50:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.310 20:50:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:22.310 20:50:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.310 20:50:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.310 20:50:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.310 20:50:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.310 20:50:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.310 20:50:43 -- setup/common.sh@18 -- # local node=0 00:04:22.310 20:50:43 -- setup/common.sh@19 -- # local var val 00:04:22.310 20:50:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.310 20:50:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.310 20:50:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.310 20:50:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.310 20:50:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.310 20:50:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7887736 kB' 'MemUsed: 4351380 kB' 'SwapCached: 0 kB' 'Active: 470960 kB' 'Inactive: 1428900 kB' 'Active(anon): 128092 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1782268 kB' 'Mapped: 53472 kB' 'AnonPages: 119220 kB' 'Shmem: 10492 kB' 'KernelStack: 6448 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63028 kB' 'Slab: 158736 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 
00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.310 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.310 20:50:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # continue 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.311 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.311 20:50:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.311 20:50:43 -- setup/common.sh@33 -- # echo 0 00:04:22.311 20:50:43 -- setup/common.sh@33 -- # return 0 00:04:22.311 20:50:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.311 20:50:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.311 20:50:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.311 20:50:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.311 node0=1024 expecting 1024 00:04:22.311 20:50:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.311 20:50:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.311 00:04:22.311 real 0m1.420s 00:04:22.311 user 0m0.666s 00:04:22.311 sys 0m0.730s 00:04:22.311 20:50:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:22.311 20:50:43 -- common/autotest_common.sh@10 -- # set +x 00:04:22.311 ************************************ 00:04:22.311 END TEST default_setup 00:04:22.311 ************************************ 00:04:22.571 20:50:43 -- setup/hugepages.sh@211 -- # run_test 
per_node_1G_alloc per_node_1G_alloc 00:04:22.572 20:50:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.572 20:50:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.572 20:50:43 -- common/autotest_common.sh@10 -- # set +x 00:04:22.572 ************************************ 00:04:22.572 START TEST per_node_1G_alloc 00:04:22.572 ************************************ 00:04:22.572 20:50:43 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:22.572 20:50:43 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:22.572 20:50:43 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:22.572 20:50:43 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:22.572 20:50:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:22.572 20:50:43 -- setup/hugepages.sh@51 -- # shift 00:04:22.572 20:50:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:22.572 20:50:43 -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.572 20:50:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.572 20:50:43 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:22.572 20:50:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:22.572 20:50:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:22.572 20:50:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.572 20:50:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:22.572 20:50:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:22.572 20:50:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.572 20:50:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.572 20:50:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:22.572 20:50:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.572 20:50:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:22.572 20:50:43 -- setup/hugepages.sh@73 -- # return 0 00:04:22.572 20:50:43 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:22.572 20:50:43 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:22.572 20:50:43 -- setup/hugepages.sh@146 -- # setup output 00:04:22.572 20:50:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.572 20:50:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.094 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.094 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.094 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.094 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.094 20:50:43 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:23.094 20:50:43 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:23.094 20:50:43 -- setup/hugepages.sh@89 -- # local node 00:04:23.094 20:50:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.094 20:50:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.094 20:50:43 -- setup/hugepages.sh@92 -- # local surp 00:04:23.094 20:50:43 -- setup/hugepages.sh@93 -- # local resv 00:04:23.094 20:50:43 -- setup/hugepages.sh@94 -- # local anon 00:04:23.094 20:50:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.094 20:50:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.094 20:50:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.094 20:50:43 -- setup/common.sh@18 -- # local node= 
00:04:23.094 20:50:43 -- setup/common.sh@19 -- # local var val 00:04:23.094 20:50:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.094 20:50:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.094 20:50:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.094 20:50:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.094 20:50:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.094 20:50:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.094 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.094 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8935648 kB' 'MemAvailable: 10501992 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471740 kB' 'Inactive: 1428908 kB' 'Active(anon): 128872 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119980 kB' 'Mapped: 53564 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158760 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95732 kB' 'KernelStack: 6512 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 
20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 
00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.095 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.095 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:43 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.096 20:50:44 -- setup/common.sh@33 -- # echo 0 00:04:23.096 20:50:44 -- setup/common.sh@33 -- # return 0 00:04:23.096 20:50:44 -- setup/hugepages.sh@97 -- # anon=0 00:04:23.096 20:50:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.096 20:50:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.096 20:50:44 -- setup/common.sh@18 -- # local node= 00:04:23.096 20:50:44 -- setup/common.sh@19 -- # local var val 00:04:23.096 20:50:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.096 20:50:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.096 20:50:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.096 20:50:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.096 20:50:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.096 20:50:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 
20:50:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8937836 kB' 'MemAvailable: 10504180 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471288 kB' 'Inactive: 1428908 kB' 'Active(anon): 128420 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158708 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95680 kB' 'KernelStack: 6448 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.096 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.096 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': 
' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.097 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.097 20:50:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 
-- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.098 20:50:44 -- setup/common.sh@33 -- # echo 0 00:04:23.098 20:50:44 -- setup/common.sh@33 -- # return 0 00:04:23.098 20:50:44 -- setup/hugepages.sh@99 -- # surp=0 00:04:23.098 20:50:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.098 20:50:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.098 20:50:44 -- setup/common.sh@18 -- # local node= 00:04:23.098 20:50:44 -- setup/common.sh@19 -- # local var val 00:04:23.098 20:50:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.098 20:50:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.098 20:50:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.098 20:50:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.098 20:50:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.098 20:50:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8937836 kB' 'MemAvailable: 10504180 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471236 kB' 'Inactive: 1428908 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158712 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95684 kB' 'KernelStack: 6448 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.098 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.098 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 
00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.099 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.099 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 
-- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 
20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.100 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.100 20:50:44 -- setup/common.sh@33 -- # echo 0 00:04:23.100 20:50:44 -- setup/common.sh@33 -- # return 0 00:04:23.100 20:50:44 -- setup/hugepages.sh@100 -- # resv=0 00:04:23.100 nr_hugepages=512 00:04:23.100 20:50:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:23.100 resv_hugepages=0 00:04:23.100 20:50:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.100 surplus_hugepages=0 00:04:23.100 20:50:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.100 anon_hugepages=0 00:04:23.100 20:50:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.100 20:50:44 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:23.100 20:50:44 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:23.100 20:50:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.100 20:50:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.100 20:50:44 -- setup/common.sh@18 -- # local node= 00:04:23.100 20:50:44 -- setup/common.sh@19 -- # local var val 00:04:23.100 20:50:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.100 20:50:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.100 20:50:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.100 20:50:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.100 20:50:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.100 20:50:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.100 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8938104 kB' 'MemAvailable: 10504448 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471236 kB' 'Inactive: 1428908 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158712 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95684 kB' 'KernelStack: 6448 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.101 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.101 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 
20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.102 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.102 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.103 20:50:44 -- setup/common.sh@33 -- # echo 512 00:04:23.103 20:50:44 -- setup/common.sh@33 -- # return 0 00:04:23.103 20:50:44 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:23.103 20:50:44 -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.103 20:50:44 -- setup/hugepages.sh@27 -- # local node 00:04:23.103 20:50:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.103 20:50:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.103 20:50:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.103 20:50:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.103 20:50:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.103 20:50:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.103 20:50:44 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.103 20:50:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.103 20:50:44 -- setup/common.sh@18 -- # local node=0 00:04:23.103 20:50:44 -- setup/common.sh@19 -- # local var val 00:04:23.103 20:50:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.103 20:50:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.103 20:50:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.103 20:50:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.103 20:50:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.103 20:50:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 20:50:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8938104 kB' 'MemUsed: 3301012 kB' 'SwapCached: 0 kB' 'Active: 471208 kB' 'Inactive: 1428908 kB' 'Active(anon): 128340 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1782268 kB' 'Mapped: 53472 kB' 'AnonPages: 119488 kB' 'Shmem: 10492 kB' 'KernelStack: 6432 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63028 kB' 'Slab: 158716 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.103 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.103 20:50:44 -- setup/common.sh@32 -- # continue 
00:04:23.103 [repetitive xtrace elided: setup/common.sh@31-32 reads each /proc/meminfo key from Inactive(anon) through HugePages_Free, compares it against HugePages_Surp, and skips it with continue]
00:04:23.104 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.104 20:50:44 -- setup/common.sh@33 -- # echo 0
00:04:23.104 20:50:44 -- setup/common.sh@33 -- # return 0
00:04:23.104 20:50:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:23.104 20:50:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:23.104 20:50:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:23.104 20:50:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:23.104 node0=512 expecting 512
00:04:23.104 20:50:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:23.104 20:50:44 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:23.104
00:04:23.104 real	0m0.774s
00:04:23.104 user	0m0.327s
00:04:23.104 sys	0m0.464s
00:04:23.104 20:50:44 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:23.104 20:50:44 -- common/autotest_common.sh@10 -- # set +x
00:04:23.104 ************************************
00:04:23.104 END TEST per_node_1G_alloc
00:04:23.104 ************************************
00:04:23.364 20:50:44 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:23.364 20:50:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:23.364 20:50:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:23.364 20:50:44 -- common/autotest_common.sh@10 -- # set +x
00:04:23.364 ************************************
00:04:23.364 START TEST even_2G_alloc
00:04:23.364 ************************************
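The trace that follows shows get_test_nr_hugepages turning the requested test size (2097152, i.e. 2G expressed in KiB) into a page count. A minimal sketch of that arithmetic, assuming both the size argument and the Hugepagesize field are in KiB (the helper and variable names here are illustrative, not the script's actual code):

  # Re-derive the nr_hugepages=1024 the trace reports: a 2G request
  # divided by the 2048 kB default hugepage size from /proc/meminfo.
  size_kb=2097152
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"   # 2097152 / 2048 = 1024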
00:04:23.364 20:50:44 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:23.364 20:50:44 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:23.364 20:50:44 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:23.364 20:50:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:23.364 20:50:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:23.364 20:50:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:23.364 20:50:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:23.364 20:50:44 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:23.364 20:50:44 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:23.364 20:50:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:23.364 20:50:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:23.364 20:50:44 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:23.364 20:50:44 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:23.364 20:50:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:23.364 20:50:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:23.364 20:50:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:23.364 20:50:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:23.364 20:50:44 -- setup/hugepages.sh@83 -- # : 0
00:04:23.364 20:50:44 -- setup/hugepages.sh@84 -- # : 0
00:04:23.364 20:50:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:23.364 20:50:44 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:23.364 20:50:44 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:23.364 20:50:44 -- setup/hugepages.sh@153 -- # setup output
00:04:23.364 20:50:44 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:23.364 20:50:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:23.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:23.939 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:23.939 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:23.939 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:23.939 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:23.939 20:50:44 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:23.939 20:50:44 -- setup/hugepages.sh@89 -- # local node
00:04:23.939 20:50:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:23.939 20:50:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:23.939 20:50:44 -- setup/hugepages.sh@92 -- # local surp
00:04:23.939 20:50:44 -- setup/hugepages.sh@93 -- # local resv
00:04:23.939 20:50:44 -- setup/hugepages.sh@94 -- # local anon
00:04:23.939 20:50:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:23.939 20:50:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:23.939 20:50:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:23.939 20:50:44 -- setup/common.sh@18 -- # local node=
00:04:23.939 20:50:44 -- setup/common.sh@19 -- # local var val
00:04:23.939 20:50:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.939 20:50:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.939 20:50:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.939 20:50:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.939 20:50:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.939 20:50:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.939 20:50:44 -- setup/common.sh@31 -- # IFS=': '
00:04:23.939 20:50:44 -- setup/common.sh@31 -- # read -r var val _
00:04:23.939 20:50:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7896608 kB' 'MemAvailable: 9462952 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471784 kB' 'Inactive: 1428908 kB' 'Active(anon): 128916 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120056 kB' 'Mapped: 53604 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158608 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95580 kB' 'KernelStack: 6444 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
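The IFS=': ' / read -r var val _ pairs in this trace are the script scanning that meminfo snapshot one "Key: value kB" line at a time until it reaches the requested key, then echoing the value. A standalone sketch of the same lookup technique (illustrative only; it borrows the get_meminfo name but reads /proc/meminfo directly rather than reproducing the SPDK helper):

  # Split each meminfo line on ': ' and print the value for the
  # requested key; the trailing "kB" unit falls into the _ field.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
  }
  get_meminfo AnonHugePages   # prints e.g. 0, matching the trace above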
00:04:23.939 [repetitive xtrace elided: setup/common.sh@32 tests each /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages and skips it with continue]
00:04:23.940 20:50:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:23.940 20:50:44 -- setup/common.sh@33 -- # echo 0
00:04:23.940 20:50:44 -- setup/common.sh@33 -- # return 0
00:04:23.940 20:50:44 -- setup/hugepages.sh@97 -- # anon=0
00:04:23.940 20:50:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:23.940 20:50:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.940 20:50:44 -- setup/common.sh@18 -- # local node=
00:04:23.940 20:50:44 -- setup/common.sh@19 -- # local var val
00:04:23.940 20:50:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.940 20:50:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.940 20:50:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.940 20:50:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.940 20:50:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.940 20:50:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.940 20:50:44 -- setup/common.sh@31 -- # IFS=': '
00:04:23.940 20:50:44 -- setup/common.sh@31 -- # read -r var val _
00:04:23.940 20:50:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7896608 kB' 'MemAvailable: 9462952 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471336 kB' 'Inactive: 1428908 kB' 'Active(anon): 128468 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119536 kB' 'Mapped: 53516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158700 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95672 kB' 'KernelStack: 6432 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
00:04:23.940 [repetitive xtrace elided: setup/common.sh@32 tests each /proc/meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp and skips it with continue]
00:04:23.942 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.942 20:50:44 -- setup/common.sh@33 -- # echo 0
00:04:23.942 20:50:44 -- setup/common.sh@33 -- # return 0
00:04:23.942 20:50:44 -- setup/hugepages.sh@99 -- # surp=0
00:04:23.942 20:50:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:23.942 20:50:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:23.942 20:50:44 -- setup/common.sh@18 -- # local node=
00:04:23.942 20:50:44 -- setup/common.sh@19 -- # local var val
00:04:23.942 20:50:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.942 20:50:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.942 20:50:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.942 20:50:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.942 20:50:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.942 20:50:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.942 20:50:44 -- setup/common.sh@31 -- # IFS=': '
00:04:23.942 20:50:44 -- setup/common.sh@31 -- # read -r var val _
00:04:23.942 20:50:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7896912 kB' 'MemAvailable: 9463256 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471280 kB' 'Inactive: 1428908 kB' 'Active(anon): 128412 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119496 kB' 'Mapped: 53516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158684 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95656 kB' 'KernelStack: 6416 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
00:04:23.942 [repetitive xtrace elided: setup/common.sh@32 tests each /proc/meminfo key from MemTotal through HugePages_Free against HugePages_Rsvd and skips it with continue]
00:04:23.943 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:23.943 20:50:44 -- setup/common.sh@33 -- # echo 0
00:04:23.943 20:50:44 -- setup/common.sh@33 -- # return 0
00:04:23.943 20:50:44 -- setup/hugepages.sh@100 -- # resv=0
00:04:23.943 nr_hugepages=1024
00:04:23.943 20:50:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:23.943 resv_hugepages=0
00:04:23.943 20:50:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:23.943 surplus_hugepages=0
00:04:23.943 20:50:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:23.943 anon_hugepages=0
00:04:23.944 20:50:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:23.944 20:50:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:23.944 20:50:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:23.944 20:50:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:23.944 20:50:44 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:23.944 20:50:44 -- setup/common.sh@18 -- # local node=
00:04:23.944 20:50:44 -- setup/common.sh@19 -- # local var val
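With anon, surp, and resv all read back as 0, the hugepages.sh@107 check above is the accounting invariant of verify_nr_hugepages: the configured total must equal the expected page count plus surplus and reserved pages. A minimal sketch of that check under the same assumed variable names (illustrative, not the test's actual code):

  # HugePages_Total from /proc/meminfo must equal the expected count
  # plus any surplus and reserved pages, here 1024 + 0 + 0.
  nr_hugepages=1024 surp=0 resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting OK'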
00:04:23.944 20:50:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.944 20:50:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.944 20:50:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.944 20:50:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.944 20:50:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.944 20:50:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7897164 kB' 'MemAvailable: 9463508 kB' 'Buffers: 2684 kB' 'Cached: 1779584 kB' 'SwapCached: 0 kB' 'Active: 471256 kB' 'Inactive: 1428908 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 53516 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158684 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95656 kB' 'KernelStack: 6448 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 
00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.944 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.944 20:50:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 
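[Editor's note] A side note on the trace notation itself: the right-hand side of each comparison appears as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l because the script quotes the expanded variable in [[ $var == "$get" ]], which forces a literal (non-glob) match, and bash's xtrace marks that by backslash-escaping every character of the quoted word. A two-line demo of the same effect (illustrative, not taken from this log):

    get=HugePages_Total
    set -x
    [[ MemTotal == "$get" ]] || true   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    set +x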
00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.945 20:50:44 -- setup/common.sh@33 -- # echo 1024 00:04:23.945 20:50:44 -- setup/common.sh@33 -- # return 0 00:04:23.945 20:50:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.945 20:50:44 -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.945 20:50:44 -- setup/hugepages.sh@27 -- # local node 00:04:23.945 20:50:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.945 20:50:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.945 20:50:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.945 20:50:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.945 20:50:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.945 20:50:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.945 20:50:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.945 20:50:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.945 20:50:44 -- setup/common.sh@18 -- # local node=0 00:04:23.945 20:50:44 -- setup/common.sh@19 -- # local var val 00:04:23.945 20:50:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.945 20:50:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.945 20:50:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.945 20:50:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.945 20:50:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.945 20:50:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7896408 kB' 'MemUsed: 4342708 kB' 'SwapCached: 0 kB' 'Active: 471180 kB' 'Inactive: 1428908 kB' 'Active(anon): 128312 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1782268 kB' 'Mapped: 53516 kB' 'AnonPages: 119392 kB' 'Shmem: 10492 kB' 'KernelStack: 6416 kB' 'PageTables: 
4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63028 kB' 'Slab: 158684 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.945 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.945 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # continue 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.946 20:50:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.946 20:50:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.946 20:50:44 -- 
setup/common.sh@33 -- # echo 0 00:04:23.946 20:50:44 -- setup/common.sh@33 -- # return 0 00:04:23.946 20:50:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.946 20:50:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.946 20:50:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.946 20:50:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.946 node0=1024 expecting 1024 00:04:23.946 20:50:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.946 20:50:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.947 00:04:23.947 real 0m0.737s 00:04:23.947 user 0m0.349s 00:04:23.947 sys 0m0.442s 00:04:23.947 20:50:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.947 20:50:44 -- common/autotest_common.sh@10 -- # set +x 00:04:23.947 ************************************ 00:04:23.947 END TEST even_2G_alloc 00:04:23.947 ************************************ 00:04:23.947 20:50:44 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:23.947 20:50:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.947 20:50:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.947 20:50:44 -- common/autotest_common.sh@10 -- # set +x 00:04:23.947 ************************************ 00:04:23.947 START TEST odd_alloc 00:04:23.947 ************************************ 00:04:23.947 20:50:44 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:23.947 20:50:44 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:23.947 20:50:44 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:23.947 20:50:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.947 20:50:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.947 20:50:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:23.947 20:50:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.947 20:50:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.947 20:50:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.947 20:50:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:23.947 20:50:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.947 20:50:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.947 20:50:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.947 20:50:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.947 20:50:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:23.947 20:50:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.947 20:50:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:23.947 20:50:44 -- setup/hugepages.sh@83 -- # : 0 00:04:23.947 20:50:44 -- setup/hugepages.sh@84 -- # : 0 00:04:23.947 20:50:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.947 20:50:44 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:23.947 20:50:44 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:23.947 20:50:44 -- setup/hugepages.sh@160 -- # setup output 00:04:23.947 20:50:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.947 20:50:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.524 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.524 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.524 0000:00:06.0 (1b36 0010): Already using the 
uio_pci_generic driver 00:04:24.524 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.524 20:50:45 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:24.524 20:50:45 -- setup/hugepages.sh@89 -- # local node 00:04:24.524 20:50:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.524 20:50:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.524 20:50:45 -- setup/hugepages.sh@92 -- # local surp 00:04:24.524 20:50:45 -- setup/hugepages.sh@93 -- # local resv 00:04:24.524 20:50:45 -- setup/hugepages.sh@94 -- # local anon 00:04:24.524 20:50:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.524 20:50:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.524 20:50:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.524 20:50:45 -- setup/common.sh@18 -- # local node= 00:04:24.524 20:50:45 -- setup/common.sh@19 -- # local var val 00:04:24.524 20:50:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.524 20:50:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.524 20:50:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.524 20:50:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.524 20:50:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.524 20:50:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.524 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.524 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.524 20:50:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7904116 kB' 'MemAvailable: 9470464 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 471612 kB' 'Inactive: 1428912 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 53604 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158624 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95596 kB' 'KernelStack: 6484 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:24.524 20:50:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.524 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.524 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.524 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.524 20:50:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.524 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.524 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.524 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.524 20:50:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.524 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.524 20:50:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.524 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.524 20:50:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.524 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 
20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ 
PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 20:50:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.788 20:50:45 -- setup/common.sh@33 -- # echo 0 00:04:24.788 20:50:45 -- setup/common.sh@33 -- # return 0 00:04:24.788 20:50:45 -- 
setup/hugepages.sh@97 -- # anon=0 00:04:24.788 20:50:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.788 20:50:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.788 20:50:45 -- setup/common.sh@18 -- # local node= 00:04:24.788 20:50:45 -- setup/common.sh@19 -- # local var val 00:04:24.788 20:50:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.788 20:50:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.788 20:50:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.788 20:50:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.788 20:50:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.788 20:50:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7903864 kB' 'MemAvailable: 9470212 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 471292 kB' 'Inactive: 1428912 kB' 'Active(anon): 128424 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158644 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95616 kB' 'KernelStack: 6448 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # 
continue 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.788 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.788 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': 
' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.789 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 20:50:45 -- setup/common.sh@33 -- # echo 0 00:04:24.789 20:50:45 -- setup/common.sh@33 -- # return 0 00:04:24.789 20:50:45 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.789 20:50:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.789 20:50:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.789 20:50:45 -- setup/common.sh@18 -- # local node= 00:04:24.789 20:50:45 -- setup/common.sh@19 -- # local var val 00:04:24.789 20:50:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.789 20:50:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.789 20:50:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.789 20:50:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.789 20:50:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.789 20:50:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.789 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7903864 kB' 'MemAvailable: 9470212 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 
'SwapCached: 0 kB' 'Active: 471264 kB' 'Inactive: 1428912 kB' 'Active(anon): 128396 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158640 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95612 kB' 'KernelStack: 6448 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 
20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.790 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.790 20:50:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 20:50:45 -- setup/common.sh@33 -- # echo 0 00:04:24.791 20:50:45 -- setup/common.sh@33 -- # return 0 00:04:24.791 20:50:45 -- setup/hugepages.sh@100 -- # resv=0 00:04:24.791 nr_hugepages=1025 00:04:24.791 20:50:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:24.791 resv_hugepages=0 00:04:24.791 20:50:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.791 surplus_hugepages=0 00:04:24.791 20:50:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.791 anon_hugepages=0 00:04:24.791 20:50:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.791 20:50:45 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.791 20:50:45 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:24.791 20:50:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.791 20:50:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.791 20:50:45 -- setup/common.sh@18 -- # local node= 00:04:24.791 20:50:45 -- setup/common.sh@19 -- # local var val 00:04:24.791 20:50:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.791 20:50:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.791 20:50:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.791 20:50:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.791 20:50:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.791 20:50:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7903864 kB' 'MemAvailable: 9470212 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 471260 kB' 'Inactive: 1428912 kB' 'Active(anon): 128392 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119496 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158628 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95600 kB' 'KernelStack: 6416 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13458560 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 
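The long run of "continue" entries above is the meminfo reader scanning the snapshot one field at a time until it reaches the key it was asked for, here HugePages_Total. A minimal sketch of that kind of parse loop, assuming the usual "Key: value [kB]" layout of /proc/meminfo (an illustration of the idea, not the project's own helper from setup/common.sh):

get_field() {
    local want=$1 file=${2:-/proc/meminfo}
    local key val _
    while IFS=': ' read -r key val _; do
        # skip every field that is not the one requested
        [[ $key == "$want" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

On this host, get_field HugePages_Total would print 1025.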
00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.793 20:50:45 -- setup/common.sh@33 -- # echo 1025 00:04:24.793 20:50:45 -- setup/common.sh@33 -- # return 0 00:04:24.793 20:50:45 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.793 20:50:45 -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.793 20:50:45 -- setup/hugepages.sh@27 -- # local node 00:04:24.793 20:50:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.793 20:50:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:24.793 20:50:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.793 20:50:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.793 20:50:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.793 20:50:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.793 20:50:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.793 20:50:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.793 20:50:45 -- setup/common.sh@18 -- # local node=0 00:04:24.793 20:50:45 -- setup/common.sh@19 -- # local var val 00:04:24.793 20:50:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.793 20:50:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.793 20:50:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.793 20:50:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.793 20:50:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.793 20:50:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7903864 kB' 'MemUsed: 4335252 kB' 'SwapCached: 0 kB' 'Active: 471264 kB' 'Inactive: 1428912 kB' 'Active(anon): 128396 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1782272 kB' 'Mapped: 53472 kB' 'AnonPages: 119536 kB' 'Shmem: 10492 kB' 'KernelStack: 6448 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63028 kB' 'Slab: 158624 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 
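When the lookup is given a node argument, the trace above shows mem_f switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo, so the same scan runs against the per-node counters (the "Node 0 " prefix carried by the sysfs file is stripped before parsing, as the mapfile step shows). A sketch of that source selection, under the same assumptions as the helper above; node_meminfo_file is an illustrative name:

node_meminfo_file() {
    local node=$1 f=/proc/meminfo
    # per-node statistics live under sysfs; otherwise fall back to the global file
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && f=/sys/devices/system/node/node$node/meminfo
    echo "$f"
}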
20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 20:50:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 
20:50:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # continue 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 20:50:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 20:50:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 20:50:45 -- setup/common.sh@33 -- # echo 0 00:04:24.794 20:50:45 -- setup/common.sh@33 -- # return 0 00:04:24.794 20:50:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.794 20:50:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.794 20:50:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.794 20:50:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.794 node0=1025 expecting 1025 00:04:24.794 20:50:45 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:24.794 20:50:45 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:24.794 00:04:24.794 real 0m0.767s 00:04:24.794 user 0m0.331s 00:04:24.794 sys 0m0.458s 00:04:24.794 20:50:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.794 20:50:45 -- common/autotest_common.sh@10 -- # set +x 00:04:24.794 ************************************ 00:04:24.794 END TEST odd_alloc 00:04:24.794 ************************************ 00:04:24.794 20:50:45 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:24.794 20:50:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.794 20:50:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.794 20:50:45 -- common/autotest_common.sh@10 -- # set +x 00:04:24.794 
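odd_alloc passes here because every view agrees on the same odd page count: the global HugePages_Total is 1025, the per-node tally (node0=1025) matches it, and surplus and reserved pages are both 0, so the 1025 == nr_hugepages + surp + resv check holds. In essence (values taken from the trace above, not the script verbatim):

nr_hugepages=1025 surp=0 resv=0
(( nr_hugepages + surp + resv == 1025 )) || echo 'global hugepage count mismatch'
nodes_test=([0]=1025)
(( nodes_test[0] == 1025 )) || echo 'node0 hugepage count mismatch'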
************************************ 00:04:24.794 START TEST custom_alloc 00:04:24.794 ************************************ 00:04:24.794 20:50:45 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:24.794 20:50:45 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:24.794 20:50:45 -- setup/hugepages.sh@169 -- # local node 00:04:24.794 20:50:45 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:24.794 20:50:45 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:24.794 20:50:45 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:24.794 20:50:45 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:24.794 20:50:45 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:24.794 20:50:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:24.794 20:50:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.794 20:50:45 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:24.794 20:50:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.794 20:50:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.794 20:50:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.794 20:50:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.794 20:50:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.794 20:50:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.794 20:50:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.794 20:50:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.794 20:50:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:24.794 20:50:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.795 20:50:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:24.795 20:50:45 -- setup/hugepages.sh@83 -- # : 0 00:04:24.795 20:50:45 -- setup/hugepages.sh@84 -- # : 0 00:04:24.795 20:50:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.795 20:50:45 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:24.795 20:50:45 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:24.795 20:50:45 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:24.795 20:50:45 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:24.795 20:50:45 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:24.795 20:50:45 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:24.795 20:50:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.795 20:50:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.795 20:50:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.795 20:50:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.795 20:50:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.795 20:50:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.795 20:50:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.795 20:50:45 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:24.795 20:50:45 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:24.795 20:50:45 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:24.795 20:50:45 -- setup/hugepages.sh@78 -- # return 0 00:04:24.795 20:50:45 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:24.795 20:50:45 -- setup/hugepages.sh@187 -- # setup output 00:04:24.795 20:50:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.795 20:50:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.362 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.362 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.362 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.362 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.362 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.362 20:50:46 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:25.362 20:50:46 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:25.362 20:50:46 -- setup/hugepages.sh@89 -- # local node 00:04:25.362 20:50:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.362 20:50:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.362 20:50:46 -- setup/hugepages.sh@92 -- # local surp 00:04:25.362 20:50:46 -- setup/hugepages.sh@93 -- # local resv 00:04:25.362 20:50:46 -- setup/hugepages.sh@94 -- # local anon 00:04:25.362 20:50:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.362 20:50:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.362 20:50:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.362 20:50:46 -- setup/common.sh@18 -- # local node= 00:04:25.362 20:50:46 -- setup/common.sh@19 -- # local var val 00:04:25.362 20:50:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.362 20:50:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.362 20:50:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.362 20:50:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.362 20:50:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.362 20:50:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.362 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.362 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.362 20:50:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8956768 kB' 'MemAvailable: 10523116 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 471656 kB' 'Inactive: 1428912 kB' 'Active(anon): 128788 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 53572 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158648 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95620 kB' 'KernelStack: 6428 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:25.362 20:50:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 
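The custom_alloc test starting here requests 1048576 kB of hugepage memory; with the 2048 kB default hugepage size that works out to 512 pages, all of which land on the single node of this VM, hence HUGENODE='nodes_hp[0]=512' and the HugePages_Total: 512 visible in the snapshot above. The sizing arithmetic, as a small illustration:

size_kb=1048576      # pool size passed to get_test_nr_hugepages
hugepage_kb=2048     # Hugepagesize reported in the snapshot
echo $(( size_kb / hugepage_kb ))   # 512 pages, assigned to node 0 here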
20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.363 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.363 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.364 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.364 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.364 20:50:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.364 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.364 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.364 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.364 20:50:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.364 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.364 
20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.364 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.364 20:50:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.364 20:50:46 -- setup/common.sh@33 -- # echo 0 00:04:25.364 20:50:46 -- setup/common.sh@33 -- # return 0 00:04:25.364 20:50:46 -- setup/hugepages.sh@97 -- # anon=0 00:04:25.364 20:50:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.364 20:50:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.364 20:50:46 -- setup/common.sh@18 -- # local node= 00:04:25.364 20:50:46 -- setup/common.sh@19 -- # local var val 00:04:25.364 20:50:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.364 20:50:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.364 20:50:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.364 20:50:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.364 20:50:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.364 20:50:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.626 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.626 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8956676 kB' 'MemAvailable: 10523024 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 471236 kB' 'Inactive: 1428912 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119412 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158680 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95652 kB' 'KernelStack: 6416 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.627 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.627 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # 
continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.628 20:50:46 -- setup/common.sh@33 -- # echo 0 00:04:25.628 20:50:46 -- setup/common.sh@33 -- # return 0 00:04:25.628 20:50:46 -- setup/hugepages.sh@99 -- # surp=0 00:04:25.628 20:50:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.628 20:50:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.628 20:50:46 -- setup/common.sh@18 -- # local node= 00:04:25.628 20:50:46 -- setup/common.sh@19 -- # local var val 00:04:25.628 20:50:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.628 20:50:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.628 20:50:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.628 20:50:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.628 20:50:46 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:25.628 20:50:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8956676 kB' 'MemAvailable: 10523024 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 471236 kB' 'Inactive: 1428912 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119412 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158680 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95652 kB' 'KernelStack: 6416 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.628 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.628 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 
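[editor's note] The long runs of "IFS=': ' / read -r var val _ / continue" records above and below are bash xtrace from setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches the one requested (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total, ...). A minimal reconstruction of that helper, inferred from the trace rather than copied from the SPDK source, looks roughly like:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix strip below

    # get_meminfo KEY [NODE] -- sketch of the setup/common.sh helper whose xtrace
    # fills this log; caching and error handling are inferred, not copied.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, read that node's own meminfo instead of the global one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so keys match.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # For the memory snapshot printed above:
    #   get_meminfo HugePages_Total    -> 512
    #   get_meminfo HugePages_Surp     -> 0
    #   get_meminfo HugePages_Surp 0   -> 0   (reads node0's meminfo)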
00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- 
setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.629 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.629 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.630 20:50:46 -- setup/common.sh@33 -- # echo 0 00:04:25.630 20:50:46 -- setup/common.sh@33 -- # return 0 00:04:25.630 20:50:46 -- setup/hugepages.sh@100 -- # resv=0 00:04:25.630 nr_hugepages=512 00:04:25.630 20:50:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:25.630 resv_hugepages=0 00:04:25.630 20:50:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.630 surplus_hugepages=0 00:04:25.630 20:50:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.630 anon_hugepages=0 00:04:25.630 20:50:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.630 20:50:46 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:25.630 20:50:46 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:25.630 20:50:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.630 20:50:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.630 20:50:46 -- setup/common.sh@18 -- # local node= 00:04:25.630 20:50:46 -- setup/common.sh@19 -- # local var val 00:04:25.630 20:50:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.630 20:50:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.630 20:50:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.630 20:50:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.630 20:50:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.630 20:50:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8956676 kB' 'MemAvailable: 10523024 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 470940 kB' 'Inactive: 1428912 kB' 'Active(anon): 128072 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119376 kB' 'Mapped: 53472 kB' 'Shmem: 10492 kB' 'KReclaimable: 63028 kB' 'Slab: 158676 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95648 kB' 'KernelStack: 6400 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 
20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.630 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.630 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 
00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 
20:50:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.631 20:50:46 -- setup/common.sh@33 -- # echo 512 00:04:25.631 20:50:46 -- setup/common.sh@33 -- # return 0 00:04:25.631 20:50:46 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:25.631 20:50:46 -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.631 20:50:46 -- setup/hugepages.sh@27 -- # local node 00:04:25.631 20:50:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.631 20:50:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.631 20:50:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.631 20:50:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.631 20:50:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.631 20:50:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.631 20:50:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.631 20:50:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.631 20:50:46 -- setup/common.sh@18 -- # local node=0 00:04:25.631 20:50:46 -- setup/common.sh@19 -- # local var val 00:04:25.631 20:50:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.631 20:50:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.631 20:50:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.631 20:50:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.631 20:50:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.631 20:50:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.631 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.631 20:50:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8956676 kB' 'MemUsed: 3282440 kB' 'SwapCached: 0 kB' 'Active: 471120 kB' 'Inactive: 1428912 kB' 'Active(anon): 128252 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1782272 kB' 'Mapped: 53472 kB' 'AnonPages: 119296 kB' 'Shmem: 10492 kB' 'KernelStack: 6436 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63028 kB' 'Slab: 158668 kB' 'SReclaimable: 63028 kB' 'SUnreclaim: 95640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.631 20:50:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # 
continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.632 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.632 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.633 20:50:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.633 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.633 20:50:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.633 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.633 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.633 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.633 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.633 20:50:46 -- setup/common.sh@32 -- # continue 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.633 20:50:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.633 20:50:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.633 20:50:46 -- setup/common.sh@33 -- # echo 0 00:04:25.633 20:50:46 -- setup/common.sh@33 -- # return 0 00:04:25.633 20:50:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.633 20:50:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.633 20:50:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.633 20:50:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.633 20:50:46 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:25.633 node0=512 expecting 512 00:04:25.633 20:50:46 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:25.633 00:04:25.633 real 0m0.745s 00:04:25.633 user 0m0.360s 00:04:25.633 sys 0m0.434s 00:04:25.633 20:50:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.633 20:50:46 -- common/autotest_common.sh@10 -- # set +x 00:04:25.633 ************************************ 00:04:25.633 END TEST custom_alloc 00:04:25.633 ************************************ 00:04:25.633 20:50:46 -- 
setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:25.633 20:50:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.633 20:50:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.633 20:50:46 -- common/autotest_common.sh@10 -- # set +x 00:04:25.633 ************************************ 00:04:25.633 START TEST no_shrink_alloc 00:04:25.633 ************************************ 00:04:25.633 20:50:46 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:25.633 20:50:46 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:25.633 20:50:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.633 20:50:46 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:25.633 20:50:46 -- setup/hugepages.sh@51 -- # shift 00:04:25.633 20:50:46 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:25.633 20:50:46 -- setup/hugepages.sh@52 -- # local node_ids 00:04:25.633 20:50:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.633 20:50:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:25.633 20:50:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:25.633 20:50:46 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:25.633 20:50:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.633 20:50:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.633 20:50:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:25.633 20:50:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.633 20:50:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.633 20:50:46 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:25.633 20:50:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:25.633 20:50:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:25.633 20:50:46 -- setup/hugepages.sh@73 -- # return 0 00:04:25.633 20:50:46 -- setup/hugepages.sh@198 -- # setup output 00:04:25.633 20:50:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.633 20:50:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.206 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.206 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.206 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.206 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.206 20:50:47 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:26.206 20:50:47 -- setup/hugepages.sh@89 -- # local node 00:04:26.206 20:50:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.206 20:50:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.206 20:50:47 -- setup/hugepages.sh@92 -- # local surp 00:04:26.206 20:50:47 -- setup/hugepages.sh@93 -- # local resv 00:04:26.206 20:50:47 -- setup/hugepages.sh@94 -- # local anon 00:04:26.206 20:50:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.206 20:50:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.206 20:50:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.206 20:50:47 -- setup/common.sh@18 -- # local node= 00:04:26.206 20:50:47 -- setup/common.sh@19 -- # local var val 00:04:26.206 20:50:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.206 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.206 20:50:47 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.206 20:50:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.206 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.206 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7907704 kB' 'MemAvailable: 9474048 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 469172 kB' 'Inactive: 1428912 kB' 'Active(anon): 126304 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117432 kB' 'Mapped: 52696 kB' 'Shmem: 10492 kB' 'KReclaimable: 63020 kB' 'Slab: 158436 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95416 kB' 'KernelStack: 6316 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 
-- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.206 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.206 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 
20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.207 20:50:47 -- setup/common.sh@33 -- # echo 0 00:04:26.207 20:50:47 -- setup/common.sh@33 -- # return 0 00:04:26.207 20:50:47 -- setup/hugepages.sh@97 -- # anon=0 00:04:26.207 20:50:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.207 20:50:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.207 20:50:47 -- setup/common.sh@18 -- # local node= 00:04:26.207 20:50:47 -- setup/common.sh@19 -- # local var val 00:04:26.207 20:50:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.207 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.207 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.207 20:50:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.207 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.207 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7907452 kB' 'MemAvailable: 9473796 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 468996 kB' 'Inactive: 1428912 kB' 'Active(anon): 126128 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 
'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117244 kB' 'Mapped: 52624 kB' 'Shmem: 10492 kB' 'KReclaimable: 63020 kB' 'Slab: 158428 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95408 kB' 'KernelStack: 6352 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # 
continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 20:50:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 20:50:47 -- setup/common.sh@33 -- # echo 0 00:04:26.209 20:50:47 -- setup/common.sh@33 -- # return 0 00:04:26.209 20:50:47 -- setup/hugepages.sh@99 -- # surp=0 00:04:26.209 20:50:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.209 20:50:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.209 20:50:47 -- setup/common.sh@18 -- # local node= 00:04:26.209 20:50:47 -- setup/common.sh@19 -- # local var val 00:04:26.209 20:50:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.209 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.209 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.209 20:50:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.209 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.209 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7907452 kB' 'MemAvailable: 9473796 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 468736 kB' 'Inactive: 1428912 kB' 'Active(anon): 125868 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116948 kB' 'Mapped: 52624 kB' 'Shmem: 10492 kB' 'KReclaimable: 63020 kB' 'Slab: 158428 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95408 kB' 'KernelStack: 6336 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 20:50:47 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.209 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # continue 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.210 20:50:47 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.210 20:50:47 -- setup/common.sh@33 -- # echo 0
00:04:26.210 20:50:47 -- setup/common.sh@33 -- # return 0
00:04:26.210 20:50:47 -- setup/hugepages.sh@100 -- # resv=0
00:04:26.210 nr_hugepages=1024
00:04:26.210 20:50:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:26.210 resv_hugepages=0
00:04:26.210 20:50:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:26.210 surplus_hugepages=0
00:04:26.210 20:50:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:26.210 anon_hugepages=0
00:04:26.210 20:50:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:26.210 20:50:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.210 20:50:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:26.210 20:50:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:26.210 20:50:47 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:26.210 20:50:47 -- setup/common.sh@18 -- # local node=
00:04:26.210 20:50:47 -- setup/common.sh@19 -- # local var val
00:04:26.210 20:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.210 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.210 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.210 20:50:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.210 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.210 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.210 20:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:26.210 20:50:47 -- setup/common.sh@31 -- # read -r var val _
00:04:26.210 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7907452 kB' 'MemAvailable: 9473796 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 468948 kB' 'Inactive: 1428912 kB' 'Active(anon): 126080 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117160 kB' 'Mapped: 52624 kB' 'Shmem: 10492 kB' 'KReclaimable: 63020 kB' 'Slab: 158428 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95408 kB' 'KernelStack: 6320 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
[... repetitive per-key scan condensed: every /proc/meminfo key before HugePages_Total (MemTotal through Unaccepted) is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l at setup/common.sh@32 and skipped with "continue" ...]
00:04:26.473 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.473 20:50:47 -- setup/common.sh@33 -- # echo 1024
00:04:26.473 20:50:47 -- setup/common.sh@33 -- # return 0
00:04:26.473 20:50:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
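The lookup just traced is the helper pattern every later get_meminfo call in this run reuses: pick the meminfo file, strip any "Node N " prefix, then scan key by key until the requested field matches. A minimal runnable bash restatement, reconstructed from the @17-@33 commands above (an approximation for illustration, not the literal setup/common.sh source):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern used at @29
  # get_meminfo KEY [NODE]: print KEY's value column and return 0 on a match.
  get_meminfo() {
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo
      # Per-node stats come from sysfs when a node is given (@23-@24);
      # otherwise the global /proc/meminfo is used (@22).
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"            # @28
      mem=("${mem[@]#Node +([0-9]) }")     # @29: strip "Node N " prefixes
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"                # @31
          [[ $var == "$get" ]] && { echo "$val"; return 0; }    # @32-@33
      done
      return 1
  }
  get_meminfo HugePages_Total     # prints 1024 on this VM
  get_meminfo HugePages_Surp 0    # prints 0 for node0

The linear scan explains the long comparison runs condensed above: each call walks the file from MemTotal until the requested key is reached.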
00:04:26.473 20:50:47 -- setup/hugepages.sh@112 -- # get_nodes
00:04:26.473 20:50:47 -- setup/hugepages.sh@27 -- # local node
00:04:26.473 20:50:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.473 20:50:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:26.473 20:50:47 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:26.473 20:50:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:26.473 20:50:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:26.473 20:50:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:26.473 20:50:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:26.473 20:50:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.473 20:50:47 -- setup/common.sh@18 -- # local node=0
00:04:26.473 20:50:47 -- setup/common.sh@19 -- # local var val
00:04:26.473 20:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.473 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.473 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:26.473 20:50:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:26.473 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.473 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.473 20:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:26.473 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7907452 kB' 'MemUsed: 4331664 kB' 'SwapCached: 0 kB' 'Active: 468876 kB' 'Inactive: 1428912 kB' 'Active(anon): 126008 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1782272 kB' 'Mapped: 52624 kB' 'AnonPages: 117084 kB' 'Shmem: 10492 kB' 'KernelStack: 6304 kB' 'PageTables: 3780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63020 kB' 'Slab: 158428 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:26.473 20:50:47 -- setup/common.sh@31 -- # read -r var val _
[... repetitive per-key scan condensed: every node0 meminfo key before HugePages_Surp (MemTotal through HugePages_Free) is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p at setup/common.sh@32 and skipped with "continue" ...]
00:04:26.474 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.474 20:50:47 -- setup/common.sh@33 -- # echo 0
00:04:26.474 20:50:47 -- setup/common.sh@33 -- # return 0
00:04:26.474 20:50:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.474 20:50:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.474 20:50:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.474 20:50:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.474 node0=1024 expecting 1024
00:04:26.474 20:50:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:26.474 20:50:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:26.474 20:50:47 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:26.474 20:50:47 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:26.474 20:50:47 -- setup/hugepages.sh@202 -- # setup output
00:04:26.474 20:50:47 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.474 20:50:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:26.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:26.997 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.997 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.997 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.997 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.997 INFO: Requested 512 hugepages but 1024 already allocated on node0
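The INFO line shows the allocation step completing as a no-op: NRHUGE=512 was requested with CLEAR_HUGE=no, and a 1024-page pool already existed on node0. A hedged sketch of the guard that behavior implies, using the standard sysfs location for node0's 2 MiB pool (the variable handling here is an assumption for illustration, not setup.sh's actual code):

  #!/usr/bin/env bash
  NRHUGE=${NRHUGE:-512}
  CLEAR_HUGE=${CLEAR_HUGE:-no}
  nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  current=$(< "$nr")
  if [[ $CLEAR_HUGE == no ]] && (( current >= NRHUGE )); then
      # Leave the larger pre-existing reservation in place.
      echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
  else
      echo "$NRHUGE" > "$nr"   # (re)size the static pool (requires root)
  fi

Leaving an oversized pool alone is the safer choice here, since shrinking it mid-run could fail if pages are already in use.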
00:04:26.997 20:50:47 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:26.997 20:50:47 -- setup/hugepages.sh@89 -- # local node
00:04:26.997 20:50:47 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.997 20:50:47 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.997 20:50:47 -- setup/hugepages.sh@92 -- # local surp
00:04:26.997 20:50:47 -- setup/hugepages.sh@93 -- # local resv
00:04:26.997 20:50:47 -- setup/hugepages.sh@94 -- # local anon
00:04:26.997 20:50:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.997 20:50:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.997 20:50:47 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.997 20:50:47 -- setup/common.sh@18 -- # local node=
00:04:26.997 20:50:47 -- setup/common.sh@19 -- # local var val
00:04:26.997 20:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.997 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.997 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.997 20:50:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.997 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.997 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.997 20:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:26.997 20:50:47 -- setup/common.sh@31 -- # read -r var val _
00:04:26.997 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7905104 kB' 'MemAvailable: 9471448 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 469964 kB' 'Inactive: 1428912 kB' 'Active(anon): 127096 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118228 kB' 'Mapped: 52652 kB' 'Shmem: 10492 kB' 'KReclaimable: 63020 kB' 'Slab: 158420 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95400 kB' 'KernelStack: 6460 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
[... repetitive per-key scan condensed: every /proc/meminfo key before AnonHugePages (MemTotal through HardwareCorrupted) is tested against \A\n\o\n\H\u\g\e\P\a\g\e\s at setup/common.sh@32 and skipped with "continue" ...]
00:04:26.998 20:50:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.998 20:50:47 -- setup/common.sh@33 -- # echo 0
00:04:26.998 20:50:47 -- setup/common.sh@33 -- # return 0
00:04:26.998 20:50:47 -- setup/hugepages.sh@97 -- # anon=0
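With anon pinned to 0, the lookups traced next supply surp and resv, the remaining terms of the identity already checked at hugepages.sh@107/@110. Restated compactly with the values from this run (using the get_meminfo sketch given earlier; a summary of the traced flow, not verbatim script code):

  nr_hugepages=1024                    # requested pool size from earlier in the run
  anon=$(get_meminfo AnonHugePages)    # 0: THP is not inflating the count
  surp=$(get_meminfo HugePages_Surp)   # 0: no surplus pages beyond the static pool
  resv=$(get_meminfo HugePages_Rsvd)   # 0: nothing reserved but not yet faulted in
  total=$(get_meminfo HugePages_Total) # 1024
  (( total == nr_hugepages + surp + resv ))   # here: 1024 == 1024 + 0 + 0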
00:04:26.998 20:50:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:26.998 20:50:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.998 20:50:47 -- setup/common.sh@18 -- # local node=
00:04:26.998 20:50:47 -- setup/common.sh@19 -- # local var val
00:04:26.998 20:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.998 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.998 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.998 20:50:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.998 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.998 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.998 20:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:26.998 20:50:47 -- setup/common.sh@31 -- # read -r var val _
00:04:26.998 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7905264 kB' 'MemAvailable: 9471608 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 469328 kB' 'Inactive: 1428912 kB' 'Active(anon): 126460 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117636 kB' 'Mapped: 52708 kB' 'Shmem: 10492 kB' 'KReclaimable: 63020 kB' 'Slab: 158420 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95400 kB' 'KernelStack: 6364 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
[... repetitive per-key scan condensed: every /proc/meminfo key before HugePages_Surp (MemTotal through HugePages_Rsvd) is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p at setup/common.sh@32 and skipped with "continue" ...]
00:04:27.000 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.000 20:50:47 -- setup/common.sh@33 -- # echo 0
00:04:27.000 20:50:47 -- setup/common.sh@33 -- # return 0
00:04:27.000 20:50:47 -- setup/hugepages.sh@99 -- # surp=0
00:04:27.000 20:50:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:27.000 20:50:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:27.000 20:50:47 -- setup/common.sh@18 -- # local node=
00:04:27.000 20:50:47 -- setup/common.sh@19 -- # local var val
00:04:27.000 20:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.000 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.000 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.000 20:50:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.000 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.000 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.000 20:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:27.000 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7905264 kB' 'MemAvailable: 9471608 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 469316 kB' 'Inactive: 1428912 kB' 'Active(anon): 126448 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117572 kB' 'Mapped: 52572 kB' 'Shmem: 10492 kB' 'KReclaimable: 63020 kB' 'Slab: 158424 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95404 kB' 'KernelStack: 6368 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
00:04:27.000 20:50:47 -- setup/common.sh@31 -- # read -r var val _
[... per-key scan against \H\u\g\e\P\a\g\e\s\_\R\s\v\d in progress (MemTotal, MemFree, ...); the excerpt ends mid-scan ...]
setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- 
setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.001 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.001 20:50:47 -- setup/common.sh@33 -- # echo 0 00:04:27.001 20:50:47 -- setup/common.sh@33 -- # return 0 00:04:27.001 20:50:47 -- setup/hugepages.sh@100 -- # resv=0 00:04:27.001 nr_hugepages=1024 00:04:27.001 20:50:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:27.001 resv_hugepages=0 00:04:27.001 20:50:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.001 surplus_hugepages=0 00:04:27.001 20:50:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.001 anon_hugepages=0 00:04:27.001 20:50:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.001 20:50:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.001 20:50:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:27.001 20:50:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.001 20:50:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.001 20:50:47 -- setup/common.sh@18 -- # local node= 00:04:27.001 20:50:47 -- setup/common.sh@19 -- # local var val 00:04:27.001 20:50:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.001 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.001 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.001 20:50:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.001 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.001 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.001 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.002 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7907080 kB' 'MemAvailable: 9473424 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 468972 kB' 'Inactive: 1428912 kB' 'Active(anon): 126104 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117188 kB' 'Mapped: 52624 kB' 'Shmem: 10492 kB' 'KReclaimable: 63020 kB' 'Slab: 158420 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95400 kB' 'KernelStack: 6336 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB' 00:04:27.002 20:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.002 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.002 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.002 20:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.002 20:50:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.002 20:50:47 -- setup/common.sh@32 -- # continue 00:04:27.002 20:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.002 20:50:47 -- 
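The check traced at hugepages.sh@107 above is the heart of the test: the pool the kernel reports must equal exactly the pool the test configured, adjusted for surplus and reserved pages. A standalone sketch of that gate, using the variable names from the trace (the literals are the values observed in this run, not constants of the script):

# Sketch of the assertion at setup/hugepages.sh@107/@109, reconstructed
# from the xtrace above; not the upstream source.
nr_hugepages=1024   # pool size this test configured earlier
surp=0              # HugePages_Surp read back from /proc/meminfo
resv=0              # HugePages_Rsvd read back from /proc/meminfo

# The reported total must be the configured pool plus any surplus and
# reserved pages -- otherwise the allocation silently shrank or leaked.
(( 1024 == nr_hugepages + surp + resv )) || exit 1
(( 1024 == nr_hugepages )) || exit 1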
00:04:27.001 20:50:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:27.001 20:50:47 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:27.001 20:50:47 -- setup/common.sh@18 -- # local node=
00:04:27.001 20:50:47 -- setup/common.sh@19 -- # local var val
00:04:27.001 20:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.001 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.001 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.001 20:50:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.001 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.001 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.001 20:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:27.002 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7907080 kB' 'MemAvailable: 9473424 kB' 'Buffers: 2684 kB' 'Cached: 1779588 kB' 'SwapCached: 0 kB' 'Active: 468972 kB' 'Inactive: 1428912 kB' 'Active(anon): 126104 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117188 kB' 'Mapped: 52624 kB' 'Shmem: 10492 kB' 'KReclaimable: 63020 kB' 'Slab: 158420 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95400 kB' 'KernelStack: 6336 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 5060608 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the per-key scan walks the snapshot above; nothing before HugePages_Total matches \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, each key hits continue]
00:04:27.003 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:27.003 20:50:47 -- setup/common.sh@33 -- # echo 1024
00:04:27.003 20:50:47 -- setup/common.sh@33 -- # return 0
00:04:27.003 20:50:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:27.003 20:50:47 -- setup/hugepages.sh@112 -- # get_nodes
00:04:27.003 20:50:47 -- setup/hugepages.sh@27 -- # local node
00:04:27.003 20:50:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.003 20:50:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:27.003 20:50:47 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:27.003 20:50:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:27.003 20:50:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:27.003 20:50:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:27.003 20:50:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:27.003 20:50:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.003 20:50:47 -- setup/common.sh@18 -- # local node=0
00:04:27.003 20:50:47 -- setup/common.sh@19 -- # local var val
00:04:27.003 20:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.003 20:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.003 20:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:27.003 20:50:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:27.003 20:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.003 20:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.003 20:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:27.003 20:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7907080 kB' 'MemUsed: 4332036 kB' 'SwapCached: 0 kB' 'Active: 469004 kB' 'Inactive: 1428912 kB' 'Active(anon): 126136 kB' 'Inactive(anon): 0 kB' 'Active(file): 342868 kB' 'Inactive(file): 1428912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1782272 kB' 'Mapped: 52624 kB' 'AnonPages: 117220 kB' 'Shmem: 10492 kB' 'KernelStack: 6336 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63020 kB' 'Slab: 158420 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 95400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the per-key scan walks the node0 snapshot above; only HugePages_Surp matches \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:04:27.004 20:50:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.004 20:50:47 -- setup/common.sh@33 -- # echo 0
00:04:27.004 20:50:47 -- setup/common.sh@33 -- # return 0
00:04:27.004 20:50:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:27.004 20:50:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:27.004 20:50:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:27.004 20:50:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:27.004 node0=1024 expecting 1024
00:04:27.004 20:50:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:27.004 20:50:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:27.004
00:04:27.004 real 0m1.411s
00:04:27.004 user 0m0.669s
00:04:27.004 sys 0m0.843s
00:04:27.004 20:50:47 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:27.004 20:50:47 -- common/autotest_common.sh@10 -- # set +x
00:04:27.004 ************************************
00:04:27.004 END TEST no_shrink_alloc
00:04:27.004 ************************************
00:04:27.004 20:50:48 -- setup/hugepages.sh@217 -- # clear_hp
00:04:27.004 20:50:48 -- setup/hugepages.sh@37 -- # local node hp
00:04:27.004 20:50:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:27.004 20:50:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:27.004 20:50:48 -- setup/hugepages.sh@41 -- # echo 0
00:04:27.004 20:50:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:27.004 20:50:48 -- setup/hugepages.sh@41 -- # echo 0
00:04:27.264 20:50:48 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:27.264 20:50:48 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:27.264 ************************************
00:04:27.264 END TEST hugepages
00:04:27.264 ************************************
00:04:27.264
00:04:27.264 real 0m6.410s
00:04:27.264 user 0m2.940s
00:04:27.264 sys 0m3.670s
00:04:27.264 20:50:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:27.264 20:50:48 -- common/autotest_common.sh@10 -- # set +x
00:04:27.264 20:50:48 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:27.264 20:50:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:27.264 20:50:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:27.264 20:50:48 -- common/autotest_common.sh@10 -- # set +x
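Every one of the scans condensed above runs through the same get_meminfo helper in test/setup/common.sh. The following is a sketch of that pattern as reconstructed from the xtrace, not the upstream source, so details may differ:

# get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
# the node-local copy when NODE is given. Reconstructed from the trace.
shopt -s extglob  # for the +([0-9]) pattern used to strip "Node N " prefixes

get_meminfo() {
    local get=$1 node=$2
    local var val _rest
    local mem_f mem line
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # Per-node files prefix every line with "Node N ".
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "HugePages_Total: 1024" into key and value.
        IFS=': ' read -r var val _rest <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

Called as get_meminfo HugePages_Surp 0 it switches to the node-local file, which is why the trace above flips mem_f to /sys/devices/system/node/node0/meminfo before printing the node0 snapshot.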
************************************ 00:04:27.264 START TEST driver 00:04:27.264 ************************************ 00:04:27.264 20:50:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:27.264 * Looking for test storage... 00:04:27.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:27.264 20:50:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:27.264 20:50:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:27.264 20:50:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:27.264 20:50:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:27.264 20:50:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:27.264 20:50:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:27.264 20:50:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:27.264 20:50:48 -- scripts/common.sh@335 -- # IFS=.-: 00:04:27.264 20:50:48 -- scripts/common.sh@335 -- # read -ra ver1 00:04:27.264 20:50:48 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.264 20:50:48 -- scripts/common.sh@336 -- # read -ra ver2 00:04:27.264 20:50:48 -- scripts/common.sh@337 -- # local 'op=<' 00:04:27.264 20:50:48 -- scripts/common.sh@339 -- # ver1_l=2 00:04:27.264 20:50:48 -- scripts/common.sh@340 -- # ver2_l=1 00:04:27.264 20:50:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:27.264 20:50:48 -- scripts/common.sh@343 -- # case "$op" in 00:04:27.264 20:50:48 -- scripts/common.sh@344 -- # : 1 00:04:27.264 20:50:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:27.264 20:50:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.264 20:50:48 -- scripts/common.sh@364 -- # decimal 1 00:04:27.264 20:50:48 -- scripts/common.sh@352 -- # local d=1 00:04:27.264 20:50:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.264 20:50:48 -- scripts/common.sh@354 -- # echo 1 00:04:27.264 20:50:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:27.264 20:50:48 -- scripts/common.sh@365 -- # decimal 2 00:04:27.264 20:50:48 -- scripts/common.sh@352 -- # local d=2 00:04:27.264 20:50:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.264 20:50:48 -- scripts/common.sh@354 -- # echo 2 00:04:27.264 20:50:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:27.264 20:50:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:27.264 20:50:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:27.264 20:50:48 -- scripts/common.sh@367 -- # return 0 00:04:27.264 20:50:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.264 20:50:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:27.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.264 --rc genhtml_branch_coverage=1 00:04:27.264 --rc genhtml_function_coverage=1 00:04:27.264 --rc genhtml_legend=1 00:04:27.264 --rc geninfo_all_blocks=1 00:04:27.264 --rc geninfo_unexecuted_blocks=1 00:04:27.264 00:04:27.264 ' 00:04:27.264 20:50:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:27.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.264 --rc genhtml_branch_coverage=1 00:04:27.264 --rc genhtml_function_coverage=1 00:04:27.264 --rc genhtml_legend=1 00:04:27.264 --rc geninfo_all_blocks=1 00:04:27.264 --rc geninfo_unexecuted_blocks=1 00:04:27.264 00:04:27.264 ' 00:04:27.264 20:50:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:27.264 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:27.264 --rc genhtml_branch_coverage=1 00:04:27.264 --rc genhtml_function_coverage=1 00:04:27.264 --rc genhtml_legend=1 00:04:27.264 --rc geninfo_all_blocks=1 00:04:27.264 --rc geninfo_unexecuted_blocks=1 00:04:27.264 00:04:27.264 ' 00:04:27.264 20:50:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:27.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.264 --rc genhtml_branch_coverage=1 00:04:27.264 --rc genhtml_function_coverage=1 00:04:27.264 --rc genhtml_legend=1 00:04:27.264 --rc geninfo_all_blocks=1 00:04:27.264 --rc geninfo_unexecuted_blocks=1 00:04:27.264 00:04:27.264 ' 00:04:27.265 20:50:48 -- setup/driver.sh@68 -- # setup reset 00:04:27.265 20:50:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.265 20:50:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.834 20:50:54 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:33.834 20:50:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.834 20:50:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.834 20:50:54 -- common/autotest_common.sh@10 -- # set +x 00:04:33.834 ************************************ 00:04:33.834 START TEST guess_driver 00:04:33.834 ************************************ 00:04:33.834 20:50:54 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:33.834 20:50:54 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:33.834 20:50:54 -- setup/driver.sh@47 -- # local fail=0 00:04:33.834 20:50:54 -- setup/driver.sh@49 -- # pick_driver 00:04:33.834 20:50:54 -- setup/driver.sh@36 -- # vfio 00:04:33.834 20:50:54 -- setup/driver.sh@21 -- # local iommu_grups 00:04:33.834 20:50:54 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:33.834 20:50:54 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:33.834 20:50:54 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:33.834 20:50:54 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:33.834 20:50:54 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:33.834 20:50:54 -- setup/driver.sh@32 -- # return 1 00:04:33.834 20:50:54 -- setup/driver.sh@38 -- # uio 00:04:33.834 20:50:54 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:33.834 20:50:54 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:33.834 20:50:54 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:33.834 20:50:54 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:33.834 20:50:54 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:33.834 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:33.834 20:50:54 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:33.835 20:50:54 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:33.835 20:50:54 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:33.835 Looking for driver=uio_pci_generic 00:04:33.835 20:50:54 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:33.835 20:50:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.835 20:50:54 -- setup/driver.sh@45 -- # setup output config 00:04:33.835 20:50:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.835 20:50:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:34.401 20:50:55 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 
00:04:34.401 20:50:55 -- setup/driver.sh@58 -- # continue 00:04:34.401 20:50:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.401 20:50:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:34.401 20:50:55 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:34.401 20:50:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.401 20:50:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:34.401 20:50:55 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:34.401 20:50:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.401 20:50:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:34.401 20:50:55 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:34.401 20:50:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.659 20:50:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:34.659 20:50:55 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:34.659 20:50:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.659 20:50:55 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:34.659 20:50:55 -- setup/driver.sh@65 -- # setup reset 00:04:34.659 20:50:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.659 20:50:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.229 00:04:41.229 real 0m7.259s 00:04:41.229 user 0m0.921s 00:04:41.229 sys 0m1.484s 00:04:41.229 20:51:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.229 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:41.229 ************************************ 00:04:41.229 END TEST guess_driver 00:04:41.229 ************************************ 00:04:41.229 00:04:41.229 real 0m13.453s 00:04:41.229 user 0m1.372s 00:04:41.229 sys 0m2.386s 00:04:41.229 20:51:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.229 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:41.229 ************************************ 00:04:41.229 END TEST driver 00:04:41.229 ************************************ 00:04:41.229 20:51:01 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:41.229 20:51:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.229 20:51:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.229 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:41.229 ************************************ 00:04:41.229 START TEST devices 00:04:41.229 ************************************ 00:04:41.229 20:51:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:41.229 * Looking for test storage... 
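The guess_driver trace above encodes a simple rule: pick vfio-pci only when the kernel exposes populated IOMMU groups (or vfio's unsafe no-IOMMU mode is enabled), and otherwise fall back to uio_pci_generic after confirming with modprobe that the module resolves to a real .ko on this kernel. A condensed sketch of that decision, reconstructed from the xtrace (the helper name and structure are illustrative):

# Sketch of the driver pick traced in setup/driver.sh; not the real script.
pick_driver() {
    shopt -s nullglob  # an empty iommu_groups dir must count as zero entries
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=''
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio-pci only when the IOMMU is actually usable...
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
    # ...otherwise uio_pci_generic, provided modprobe resolves it to a .ko.
    elif [[ $(modprobe --show-depends uio_pci_generic) == *.ko* ]]; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

On this host the IOMMU-group glob expands to nothing and the unsafe flag is unset, which is why the run above lands on uio_pci_generic and then confirms "Looking for driver=uio_pci_generic" against the setup.sh config output.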
00:04:41.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:41.229 20:51:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.229 20:51:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.230 20:51:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.230 20:51:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.230 20:51:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.230 20:51:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.230 20:51:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.230 20:51:01 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.230 20:51:01 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.230 20:51:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.230 20:51:01 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.230 20:51:01 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.230 20:51:01 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.230 20:51:01 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.230 20:51:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.230 20:51:01 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.230 20:51:01 -- scripts/common.sh@344 -- # : 1 00:04:41.230 20:51:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.230 20:51:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.230 20:51:01 -- scripts/common.sh@364 -- # decimal 1 00:04:41.230 20:51:01 -- scripts/common.sh@352 -- # local d=1 00:04:41.230 20:51:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.230 20:51:01 -- scripts/common.sh@354 -- # echo 1 00:04:41.230 20:51:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.230 20:51:01 -- scripts/common.sh@365 -- # decimal 2 00:04:41.230 20:51:01 -- scripts/common.sh@352 -- # local d=2 00:04:41.230 20:51:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.230 20:51:01 -- scripts/common.sh@354 -- # echo 2 00:04:41.230 20:51:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.230 20:51:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.230 20:51:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.230 20:51:01 -- scripts/common.sh@367 -- # return 0 00:04:41.230 20:51:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.230 20:51:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.230 --rc genhtml_branch_coverage=1 00:04:41.230 --rc genhtml_function_coverage=1 00:04:41.230 --rc genhtml_legend=1 00:04:41.230 --rc geninfo_all_blocks=1 00:04:41.230 --rc geninfo_unexecuted_blocks=1 00:04:41.230 00:04:41.230 ' 00:04:41.230 20:51:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.230 --rc genhtml_branch_coverage=1 00:04:41.230 --rc genhtml_function_coverage=1 00:04:41.230 --rc genhtml_legend=1 00:04:41.230 --rc geninfo_all_blocks=1 00:04:41.230 --rc geninfo_unexecuted_blocks=1 00:04:41.230 00:04:41.230 ' 00:04:41.230 20:51:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.230 --rc genhtml_branch_coverage=1 00:04:41.230 --rc genhtml_function_coverage=1 00:04:41.230 --rc genhtml_legend=1 00:04:41.230 --rc geninfo_all_blocks=1 00:04:41.230 --rc geninfo_unexecuted_blocks=1 00:04:41.230 00:04:41.230 ' 00:04:41.230 20:51:01 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.230 --rc genhtml_branch_coverage=1 00:04:41.230 --rc genhtml_function_coverage=1 00:04:41.230 --rc genhtml_legend=1 00:04:41.230 --rc geninfo_all_blocks=1 00:04:41.230 --rc geninfo_unexecuted_blocks=1 00:04:41.230 00:04:41.230 ' 00:04:41.230 20:51:01 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:41.230 20:51:01 -- setup/devices.sh@192 -- # setup reset 00:04:41.230 20:51:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.230 20:51:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.169 20:51:02 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:42.169 20:51:02 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:42.169 20:51:02 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:42.169 20:51:02 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:42.169 20:51:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.169 20:51:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.169 20:51:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.169 20:51:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.169 20:51:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:42.169 20:51:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:42.169 20:51:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.169 20:51:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:42.169 20:51:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:42.169 20:51:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.169 20:51:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:42.169 20:51:02 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.169 20:51:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:42.169 20:51:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:42.169 20:51:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.169 20:51:02 -- setup/devices.sh@196 -- # blocks=() 00:04:42.169 20:51:02 -- setup/devices.sh@196 -- # declare -a blocks 00:04:42.169 20:51:02 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:42.169 20:51:02 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:42.169 20:51:02 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:42.169 20:51:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.169 20:51:02 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:42.169 20:51:02 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:42.169 20:51:02 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:42.169 20:51:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:42.169 20:51:02 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:42.169 20:51:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:42.169 20:51:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:42.169 No valid GPT data, bailing 00:04:42.169 20:51:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:42.169 20:51:03 -- scripts/common.sh@393 -- # pt= 00:04:42.169 20:51:03 -- scripts/common.sh@394 -- # return 1 00:04:42.169 20:51:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:42.169 20:51:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:42.169 20:51:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:42.169 20:51:03 -- setup/common.sh@80 -- # echo 1073741824 00:04:42.169 20:51:03 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:42.169 20:51:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.169 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:42.169 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:42.169 20:51:03 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:42.169 20:51:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:42.169 20:51:03 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:42.169 20:51:03 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:42.169 20:51:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:42.169 No valid GPT data, bailing 00:04:42.169 20:51:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:42.169 20:51:03 -- scripts/common.sh@393 -- # pt= 00:04:42.169 20:51:03 -- scripts/common.sh@394 -- # return 1 00:04:42.169 20:51:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:42.169 20:51:03 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:42.169 20:51:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:42.169 20:51:03 -- setup/common.sh@80 -- # echo 4294967296 00:04:42.169 20:51:03 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:42.169 20:51:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.169 20:51:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:42.169 20:51:03 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:42.169 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:42.169 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:42.169 20:51:03 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:42.169 20:51:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:42.169 20:51:03 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:42.169 20:51:03 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:42.169 20:51:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:42.169 No valid GPT data, bailing 00:04:42.169 20:51:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:42.169 20:51:03 -- scripts/common.sh@393 -- # pt= 00:04:42.169 20:51:03 -- scripts/common.sh@394 -- # return 1 00:04:42.169 20:51:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:42.169 20:51:03 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:42.169 20:51:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:42.169 20:51:03 -- setup/common.sh@80 -- # echo 4294967296 00:04:42.169 20:51:03 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:42.169 20:51:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.169 20:51:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:42.169 20:51:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.169 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:42.169 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:42.169 20:51:03 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:42.169 20:51:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:42.169 20:51:03 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:42.169 20:51:03 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:42.169 20:51:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:42.169 No valid GPT data, bailing 00:04:42.429 20:51:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:42.429 20:51:03 -- scripts/common.sh@393 -- # pt= 00:04:42.429 20:51:03 -- scripts/common.sh@394 -- # return 1 00:04:42.429 20:51:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:42.429 20:51:03 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:42.429 20:51:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:42.429 20:51:03 -- setup/common.sh@80 -- # echo 4294967296 00:04:42.429 20:51:03 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:42.429 20:51:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.429 20:51:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:42.429 20:51:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.429 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:42.429 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:42.429 20:51:03 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:42.429 20:51:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:42.429 20:51:03 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:42.429 20:51:03 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:42.429 20:51:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:42.429 No valid GPT data, bailing 00:04:42.429 20:51:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:42.429 
20:51:03 -- scripts/common.sh@393 -- # pt= 00:04:42.429 20:51:03 -- scripts/common.sh@394 -- # return 1 00:04:42.429 20:51:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:42.429 20:51:03 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:42.429 20:51:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:42.429 20:51:03 -- setup/common.sh@80 -- # echo 6343335936 00:04:42.429 20:51:03 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:42.429 20:51:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.429 20:51:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:42.429 20:51:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.429 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:42.429 20:51:03 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:42.429 20:51:03 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:42.429 20:51:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:42.429 20:51:03 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:42.429 20:51:03 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:42.429 20:51:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:42.429 No valid GPT data, bailing 00:04:42.429 20:51:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:42.429 20:51:03 -- scripts/common.sh@393 -- # pt= 00:04:42.429 20:51:03 -- scripts/common.sh@394 -- # return 1 00:04:42.429 20:51:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:42.429 20:51:03 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:42.429 20:51:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:42.429 20:51:03 -- setup/common.sh@80 -- # echo 5368709120 00:04:42.429 20:51:03 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:42.429 20:51:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.429 20:51:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:42.429 20:51:03 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:42.429 20:51:03 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:42.429 20:51:03 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:42.429 20:51:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.429 20:51:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.429 20:51:03 -- common/autotest_common.sh@10 -- # set +x 00:04:42.429 ************************************ 00:04:42.429 START TEST nvme_mount 00:04:42.429 ************************************ 00:04:42.430 20:51:03 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:42.430 20:51:03 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:42.430 20:51:03 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:42.430 20:51:03 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.430 20:51:03 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:42.430 20:51:03 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:42.430 20:51:03 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:42.430 20:51:03 -- setup/common.sh@40 -- # local part_no=1 00:04:42.430 20:51:03 -- setup/common.sh@41 -- # local size=1073741824 00:04:42.430 20:51:03 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:42.430 20:51:03 -- setup/common.sh@44 -- # parts=() 00:04:42.430 20:51:03 -- 
setup/common.sh@44 -- # local parts 00:04:42.430 20:51:03 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:42.430 20:51:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.430 20:51:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.430 20:51:03 -- setup/common.sh@46 -- # (( part++ )) 00:04:42.430 20:51:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.430 20:51:03 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:42.430 20:51:03 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:42.430 20:51:03 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:43.809 Creating new GPT entries in memory. 00:04:43.809 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:43.809 other utilities. 00:04:43.809 20:51:04 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:43.809 20:51:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.809 20:51:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:43.809 20:51:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.809 20:51:04 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:44.762 Creating new GPT entries in memory. 00:04:44.762 The operation has completed successfully. 00:04:44.762 20:51:05 -- setup/common.sh@57 -- # (( part++ )) 00:04:44.762 20:51:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.762 20:51:05 -- setup/common.sh@62 -- # wait 53995 00:04:44.762 20:51:05 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.762 20:51:05 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:44.762 20:51:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.762 20:51:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:44.762 20:51:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:44.762 20:51:05 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.762 20:51:05 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:44.762 20:51:05 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:44.762 20:51:05 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:44.762 20:51:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.762 20:51:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:44.762 20:51:05 -- setup/devices.sh@53 -- # local found=0 00:04:44.762 20:51:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.762 20:51:05 -- setup/devices.sh@56 -- # : 00:04:44.762 20:51:05 -- setup/devices.sh@59 -- # local pci status 00:04:44.762 20:51:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.762 20:51:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:44.762 20:51:05 -- setup/devices.sh@47 -- # setup output config 00:04:44.762 20:51:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.763 20:51:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:44.763 20:51:05 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:44.763 20:51:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.763 20:51:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:44.763 20:51:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.021 20:51:06 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:45.021 20:51:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:45.021 20:51:06 -- setup/devices.sh@63 -- # found=1 00:04:45.021 20:51:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.021 20:51:06 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:45.021 20:51:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.280 20:51:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:45.280 20:51:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.280 20:51:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:45.280 20:51:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.539 20:51:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.539 20:51:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:45.539 20:51:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.539 20:51:06 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.539 20:51:06 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:45.539 20:51:06 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:45.539 20:51:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.539 20:51:06 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.539 20:51:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:45.539 20:51:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:45.539 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:45.539 20:51:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:45.539 20:51:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:45.798 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:45.798 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:45.798 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:45.798 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:45.798 20:51:06 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:45.798 20:51:06 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:45.798 20:51:06 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.798 20:51:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:45.798 20:51:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:45.798 20:51:06 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.798 20:51:06 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:45.798 20:51:06 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:45.798 20:51:06 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:45.798 20:51:06 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.798 20:51:06 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:45.798 20:51:06 -- setup/devices.sh@53 -- # local found=0 00:04:45.798 20:51:06 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.798 20:51:06 -- setup/devices.sh@56 -- # : 00:04:45.798 20:51:06 -- setup/devices.sh@59 -- # local pci status 00:04:45.798 20:51:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.798 20:51:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:45.798 20:51:06 -- setup/devices.sh@47 -- # setup output config 00:04:45.798 20:51:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.798 20:51:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.798 20:51:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:45.798 20:51:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.056 20:51:07 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:46.057 20:51:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.315 20:51:07 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:46.315 20:51:07 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:46.315 20:51:07 -- setup/devices.sh@63 -- # found=1 00:04:46.315 20:51:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.315 20:51:07 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:46.315 20:51:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.574 20:51:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:46.574 20:51:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.574 20:51:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:46.574 20:51:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.574 20:51:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.574 20:51:07 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:46.574 20:51:07 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.574 20:51:07 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.574 20:51:07 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.574 20:51:07 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.574 20:51:07 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:46.575 20:51:07 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:46.575 20:51:07 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:46.575 20:51:07 -- setup/devices.sh@50 -- # local mount_point= 00:04:46.575 20:51:07 -- setup/devices.sh@51 -- # local test_file= 00:04:46.575 20:51:07 -- setup/devices.sh@53 -- # local found=0 
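The xtrace above walks devices.sh through its nvme_mount flow: wipe or partition the test disk, create an ext4 filesystem, mount it under test/setup/nvme_mount, write a test_nvme file, then re-run setup.sh config with PCI_ALLOWED=0000:00:08.0 to confirm the in-use disk is not rebound. A minimal standalone sketch of that partition-format-mount-verify loop follows; the device and mount point are placeholders for a scratch disk you can destroy, and the real scripts additionally serialize with flock and wait for udev via sync_dev_uevents.sh:

    #!/usr/bin/env bash
    # Sketch only: assumes $disk is a disposable test disk (placeholder!).
    set -euo pipefail
    disk=/dev/nvme1n1                    # placeholder scratch device
    mnt=/tmp/nvme_mount_test             # placeholder mount point

    sgdisk "$disk" --zap-all             # destroy existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:264191   # 262,144 sectors = 128 MiB at 512 B/sector
    mkfs.ext4 -qF "${disk}p1"            # quiet, forced format, as in setup/common.sh
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    mountpoint -q "$mnt" && touch "$mnt/test_nvme"   # the 'verify' step: mounted + test file
    [[ -e $mnt/test_nvme ]] && echo 'nvme mount verified'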
00:04:46.575 20:51:07 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:46.575 20:51:07 -- setup/devices.sh@59 -- # local pci status
00:04:46.575 20:51:07 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.575 20:51:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0
00:04:46.575 20:51:07 -- setup/devices.sh@47 -- # setup output config
00:04:46.575 20:51:07 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:46.575 20:51:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:46.834 20:51:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]]
00:04:46.834 20:51:07 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.093 20:51:07 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]]
00:04:47.093 20:51:07 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.352 20:51:08 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]]
00:04:47.352 20:51:08 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]]
00:04:47.352 20:51:08 -- setup/devices.sh@63 -- # found=1
00:04:47.352 20:51:08 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.352 20:51:08 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]]
00:04:47.352 20:51:08 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.352 20:51:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]]
00:04:47.352 20:51:08 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.612 20:51:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]]
00:04:47.612 20:51:08 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.612 20:51:08 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:47.612 20:51:08 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:47.612 20:51:08 -- setup/devices.sh@68 -- # return 0
00:04:47.612 20:51:08 -- setup/devices.sh@128 -- # cleanup_nvme
00:04:47.612 20:51:08 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:47.612 20:51:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]]
00:04:47.612 20:51:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]]
00:04:47.612 20:51:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1
00:04:47.612 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:47.612
00:04:47.612 real 0m5.167s
00:04:47.612 user 0m1.272s
00:04:47.612 sys 0m1.595s
00:04:47.612 20:51:08 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:47.612 20:51:08 -- common/autotest_common.sh@10 -- # set +x
00:04:47.612 ************************************
00:04:47.612 END TEST nvme_mount
00:04:47.612 ************************************
00:04:47.612 20:51:08 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:47.612 20:51:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:47.612 20:51:08 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:47.612 20:51:08 -- common/autotest_common.sh@10 -- # set +x
00:04:47.612 ************************************
00:04:47.612 START TEST dm_mount
00:04:47.612 ************************************
00:04:47.612 20:51:08 -- common/autotest_common.sh@1114 -- # dm_mount
00:04:47.612 20:51:08 -- setup/devices.sh@144 -- # pv=nvme1n1
00:04:47.612 20:51:08 -- setup/devices.sh@145 -- # pv0=nvme1n1p1
00:04:47.612 20:51:08 -- setup/devices.sh@146 -- # pv1=nvme1n1p2
00:04:47.612 20:51:08 -- setup/devices.sh@148 -- # partition_drive nvme1n1
00:04:47.612 20:51:08 -- setup/common.sh@39 -- # local disk=nvme1n1
00:04:47.612 20:51:08 -- setup/common.sh@40 -- # local part_no=2
00:04:47.612 20:51:08 -- setup/common.sh@41 -- # local size=1073741824
00:04:47.612 20:51:08 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:47.612 20:51:08 -- setup/common.sh@44 -- # parts=()
00:04:47.612 20:51:08 -- setup/common.sh@44 -- # local parts
00:04:47.612 20:51:08 -- setup/common.sh@46 -- # (( part = 1 ))
00:04:47.612 20:51:08 -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:47.612 20:51:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:47.612 20:51:08 -- setup/common.sh@46 -- # (( part++ ))
00:04:47.612 20:51:08 -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:47.612 20:51:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:47.612 20:51:08 -- setup/common.sh@46 -- # (( part++ ))
00:04:47.612 20:51:08 -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:47.612 20:51:08 -- setup/common.sh@51 -- # (( size /= 4096 ))
00:04:47.612 20:51:08 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all
00:04:47.612 20:51:08 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2
00:04:48.993 Creating new GPT entries in memory.
00:04:48.993 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:48.993 other utilities.
00:04:48.993 20:51:09 -- setup/common.sh@57 -- # (( part = 1 ))
00:04:48.993 20:51:09 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:48.993 20:51:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:48.993 20:51:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:48.993 20:51:09 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191
00:04:49.930 Creating new GPT entries in memory.
00:04:49.930 The operation has completed successfully.
00:04:49.930 20:51:10 -- setup/common.sh@57 -- # (( part++ ))
00:04:49.930 20:51:10 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:49.930 20:51:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:49.930 20:51:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:49.930 20:51:10 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335
00:04:50.863 The operation has completed successfully.
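At this point the disk carries two equal GPT partitions (sectors 2048-264191 and 264192-526335, 262,144 sectors each), and devices.sh@155 then builds a device-mapper node named nvme_dm_test on top of them; the trace that follows resolves /dev/mapper/nvme_dm_test to /dev/dm-0 and checks the holders/ links of both partitions. The exact dmsetup table is not echoed in the log, so the linear concatenation below is an assumption that illustrates the mechanics rather than the verbatim SPDK table:

    #!/usr/bin/env bash
    # Sketch only: assumed linear table; lengths are in 512-byte sectors.
    dmsetup create nvme_dm_test <<'EOF'
    0 262144 linear /dev/nvme1n1p1 0
    262144 262144 linear /dev/nvme1n1p2 0
    EOF
    readlink -f /dev/mapper/nvme_dm_test      # -> /dev/dm-<N>, as at devices.sh@165
    ls /sys/class/block/nvme1n1p1/holders     # lists dm-<N>, as checked at devices.sh@168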
00:04:50.864 20:51:11 -- setup/common.sh@57 -- # (( part++ )) 00:04:50.864 20:51:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.864 20:51:11 -- setup/common.sh@62 -- # wait 54623 00:04:50.864 20:51:11 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:50.864 20:51:11 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.864 20:51:11 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:50.864 20:51:11 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:50.864 20:51:11 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:50.864 20:51:11 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.864 20:51:11 -- setup/devices.sh@161 -- # break 00:04:50.864 20:51:11 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.864 20:51:11 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:50.864 20:51:11 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:50.864 20:51:11 -- setup/devices.sh@166 -- # dm=dm-0 00:04:50.864 20:51:11 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:50.864 20:51:11 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:50.864 20:51:11 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.864 20:51:11 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:50.864 20:51:11 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.864 20:51:11 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.864 20:51:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:50.864 20:51:11 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.864 20:51:11 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:50.864 20:51:11 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:50.864 20:51:11 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:50.864 20:51:11 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.864 20:51:11 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:50.864 20:51:11 -- setup/devices.sh@53 -- # local found=0 00:04:50.864 20:51:11 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:50.864 20:51:11 -- setup/devices.sh@56 -- # : 00:04:50.864 20:51:11 -- setup/devices.sh@59 -- # local pci status 00:04:50.864 20:51:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.864 20:51:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:50.864 20:51:11 -- setup/devices.sh@47 -- # setup output config 00:04:50.864 20:51:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.864 20:51:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.123 20:51:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:51.123 20:51:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.123 20:51:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:51.123 20:51:12 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.382 20:51:12 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:51.382 20:51:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:51.382 20:51:12 -- setup/devices.sh@63 -- # found=1 00:04:51.382 20:51:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.382 20:51:12 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:51.382 20:51:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.640 20:51:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:51.640 20:51:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.640 20:51:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:51.640 20:51:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.898 20:51:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.898 20:51:12 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:51.898 20:51:12 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:51.898 20:51:12 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:51.898 20:51:12 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:51.898 20:51:12 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:51.899 20:51:12 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:51.899 20:51:12 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:51.899 20:51:12 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:51.899 20:51:12 -- setup/devices.sh@50 -- # local mount_point= 00:04:51.899 20:51:12 -- setup/devices.sh@51 -- # local test_file= 00:04:51.899 20:51:12 -- setup/devices.sh@53 -- # local found=0 00:04:51.899 20:51:12 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:51.899 20:51:12 -- setup/devices.sh@59 -- # local pci status 00:04:51.899 20:51:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.899 20:51:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:51.899 20:51:12 -- setup/devices.sh@47 -- # setup output config 00:04:51.899 20:51:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.899 20:51:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.899 20:51:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:51.899 20:51:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.156 20:51:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.156 20:51:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.415 20:51:13 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.415 20:51:13 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:52.415 20:51:13 -- setup/devices.sh@63 -- # found=1 00:04:52.415 20:51:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.415 20:51:13 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.415 20:51:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.415 20:51:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.415 20:51:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.673 20:51:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.673 20:51:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.673 20:51:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.673 20:51:13 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.673 20:51:13 -- setup/devices.sh@68 -- # return 0 00:04:52.673 20:51:13 -- setup/devices.sh@187 -- # cleanup_dm 00:04:52.673 20:51:13 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:52.673 20:51:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.673 20:51:13 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:52.673 20:51:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:52.673 20:51:13 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:52.673 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.673 20:51:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:52.673 20:51:13 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:52.673 00:04:52.673 real 0m5.039s 00:04:52.673 user 0m0.872s 00:04:52.673 sys 0m1.099s 00:04:52.673 20:51:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.673 ************************************ 00:04:52.673 20:51:13 -- common/autotest_common.sh@10 -- # set +x 00:04:52.673 END TEST dm_mount 00:04:52.673 ************************************ 00:04:52.673 20:51:13 -- setup/devices.sh@1 -- # cleanup 00:04:52.673 20:51:13 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:52.673 20:51:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.673 20:51:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:52.673 20:51:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:52.932 20:51:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:52.932 20:51:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:53.190 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.190 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.190 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.190 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:53.190 20:51:13 -- setup/devices.sh@12 -- # cleanup_dm 00:04:53.190 20:51:13 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.190 20:51:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.190 20:51:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:53.190 20:51:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:53.190 20:51:13 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:53.190 20:51:13 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:53.190 00:04:53.190 real 0m12.415s 00:04:53.190 user 0m3.189s 00:04:53.190 sys 0m3.557s 00:04:53.190 20:51:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.190 ************************************ 00:04:53.190 END TEST devices 00:04:53.190 ************************************ 00:04:53.190 20:51:14 -- common/autotest_common.sh@10 -- # 
set +x 00:04:53.190 00:04:53.190 real 0m44.313s 00:04:53.190 user 0m10.635s 00:04:53.190 sys 0m13.603s 00:04:53.190 20:51:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.190 20:51:14 -- common/autotest_common.sh@10 -- # set +x 00:04:53.190 ************************************ 00:04:53.190 END TEST setup.sh 00:04:53.190 ************************************ 00:04:53.190 20:51:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:53.448 Hugepages 00:04:53.448 node hugesize free / total 00:04:53.448 node0 1048576kB 0 / 0 00:04:53.448 node0 2048kB 2048 / 2048 00:04:53.448 00:04:53.448 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.448 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:53.448 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:53.707 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:53.707 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:53.707 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:53.707 20:51:14 -- spdk/autotest.sh@128 -- # uname -s 00:04:53.707 20:51:14 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:53.707 20:51:14 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:53.707 20:51:14 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.960 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.960 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.960 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.960 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.960 20:51:15 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:56.335 20:51:16 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:56.335 20:51:16 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:56.336 20:51:16 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:56.336 20:51:16 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:56.336 20:51:16 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:56.336 20:51:16 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:56.336 20:51:16 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:56.336 20:51:16 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:56.336 20:51:16 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:56.336 20:51:17 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:56.336 20:51:17 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:56.336 20:51:17 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.852 Waiting for block devices as requested 00:04:56.852 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.852 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.852 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:57.110 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:02.383 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:05:02.383 20:51:23 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:02.383 20:51:23 -- 
common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:02.383 20:51:23 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:02.383 20:51:23 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:02.383 20:51:23 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:02.383 20:51:23 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:02.383 20:51:23 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:02.383 20:51:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:05:02.383 20:51:23 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:05:02.383 20:51:23 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:05:02.383 20:51:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:02.383 20:51:23 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:02.383 20:51:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:02.383 20:51:23 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:02.383 20:51:23 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:02.383 20:51:23 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:02.383 20:51:23 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:05:02.383 20:51:23 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:02.383 20:51:23 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:02.383 20:51:23 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:02.383 20:51:23 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1552 -- # continue 00:05:02.384 20:51:23 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:02.384 20:51:23 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:02.384 20:51:23 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:02.384 20:51:23 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:02.384 20:51:23 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:02.384 20:51:23 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:02.384 20:51:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:05:02.384 20:51:23 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:05:02.384 20:51:23 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:02.384 20:51:23 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:02.384 20:51:23 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # cut 
-d: -f2 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:02.384 20:51:23 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1552 -- # continue 00:05:02.384 20:51:23 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:02.384 20:51:23 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:02.384 20:51:23 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:02.384 20:51:23 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:05:02.384 20:51:23 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:02.384 20:51:23 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:02.384 20:51:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:02.384 20:51:23 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:02.384 20:51:23 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:02.384 20:51:23 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:02.384 20:51:23 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:02.384 20:51:23 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1552 -- # continue 00:05:02.384 20:51:23 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:02.384 20:51:23 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:02.384 20:51:23 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:02.384 20:51:23 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:05:02.384 20:51:23 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:02.384 20:51:23 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:02.384 20:51:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:02.384 20:51:23 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:02.384 20:51:23 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:02.384 20:51:23 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:02.384 20:51:23 -- 
common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:02.384 20:51:23 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:02.384 20:51:23 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:02.384 20:51:23 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:02.384 20:51:23 -- common/autotest_common.sh@1552 -- # continue 00:05:02.384 20:51:23 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:02.384 20:51:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.384 20:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:02.384 20:51:23 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:02.384 20:51:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.384 20:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:02.384 20:51:23 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.580 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.580 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.580 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.580 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.580 20:51:24 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:03.580 20:51:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.580 20:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:03.838 20:51:24 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:03.838 20:51:24 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:03.838 20:51:24 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:03.838 20:51:24 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:03.838 20:51:24 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:03.838 20:51:24 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:03.838 20:51:24 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:03.838 20:51:24 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:03.838 20:51:24 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.838 20:51:24 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.838 20:51:24 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:03.838 20:51:24 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:05:03.838 20:51:24 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:03.838 20:51:24 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:03.838 20:51:24 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:03.838 20:51:24 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:03.838 20:51:24 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.838 20:51:24 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:03.838 20:51:24 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:03.838 20:51:24 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:03.838 20:51:24 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
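The nvme_namespace_revert pass above probes each controller the same way: nvme id-ctrl output is grepped for the oacs word and for unvmcap. OACS 0x12a has bit 3 (0x8) set, the NVMe Namespace Management capability, so oacs_ns_manage evaluates to 8; an unvmcap of 0 means there is no unallocated capacity to revert, and the loop continues. The opal_revert_cleanup that follows acts only on controllers whose PCI device ID (read from /sys/bus/pci/devices/<bdf>/device) matches 0x0a54, and these QEMU drives report 0x0010, so nothing is reverted. A condensed sketch of the probe, assuming nvme-cli is installed and /dev/nvme0 exists:

    #!/usr/bin/env bash
    # Sketch: the oacs/unvmcap probe from autotest_common.sh, condensed.
    ctrlr=/dev/nvme0                                              # placeholder controller
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # ' 0x12a' on these drives
    oacs_ns_manage=$(( oacs & 0x8 ))                              # bit 3 = Namespace Management
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    echo "oacs=$oacs ns_manage=$oacs_ns_manage unvmcap=$unvmcap"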
00:05:03.838 20:51:24 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:03.838 20:51:24 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:03.838 20:51:24 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:03.838 20:51:24 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.838 20:51:24 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:03.838 20:51:24 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:03.838 20:51:24 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:03.838 20:51:24 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.838 20:51:24 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:03.838 20:51:24 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:03.838 20:51:24 -- common/autotest_common.sh@1588 -- # return 0 00:05:03.838 20:51:24 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:03.838 20:51:24 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:03.838 20:51:24 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:03.838 20:51:24 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:03.838 20:51:24 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:03.838 20:51:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.838 20:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:03.838 20:51:24 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:03.838 20:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.838 20:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.838 20:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:03.838 ************************************ 00:05:03.838 START TEST env 00:05:03.838 ************************************ 00:05:03.838 20:51:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:03.838 * Looking for test storage... 00:05:03.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:03.838 20:51:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:03.838 20:51:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:03.838 20:51:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:04.096 20:51:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:04.096 20:51:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:04.096 20:51:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:04.096 20:51:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:04.096 20:51:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:04.096 20:51:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:04.096 20:51:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.096 20:51:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:04.096 20:51:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:04.096 20:51:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:04.096 20:51:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:04.096 20:51:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:04.096 20:51:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:04.096 20:51:24 -- scripts/common.sh@344 -- # : 1 00:05:04.096 20:51:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:04.096 20:51:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.096 20:51:24 -- scripts/common.sh@364 -- # decimal 1 00:05:04.096 20:51:24 -- scripts/common.sh@352 -- # local d=1 00:05:04.096 20:51:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.096 20:51:24 -- scripts/common.sh@354 -- # echo 1 00:05:04.096 20:51:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:04.096 20:51:24 -- scripts/common.sh@365 -- # decimal 2 00:05:04.096 20:51:24 -- scripts/common.sh@352 -- # local d=2 00:05:04.096 20:51:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.096 20:51:24 -- scripts/common.sh@354 -- # echo 2 00:05:04.096 20:51:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:04.097 20:51:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:04.097 20:51:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:04.097 20:51:24 -- scripts/common.sh@367 -- # return 0 00:05:04.097 20:51:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.097 20:51:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.097 --rc genhtml_branch_coverage=1 00:05:04.097 --rc genhtml_function_coverage=1 00:05:04.097 --rc genhtml_legend=1 00:05:04.097 --rc geninfo_all_blocks=1 00:05:04.097 --rc geninfo_unexecuted_blocks=1 00:05:04.097 00:05:04.097 ' 00:05:04.097 20:51:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.097 --rc genhtml_branch_coverage=1 00:05:04.097 --rc genhtml_function_coverage=1 00:05:04.097 --rc genhtml_legend=1 00:05:04.097 --rc geninfo_all_blocks=1 00:05:04.097 --rc geninfo_unexecuted_blocks=1 00:05:04.097 00:05:04.097 ' 00:05:04.097 20:51:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.097 --rc genhtml_branch_coverage=1 00:05:04.097 --rc genhtml_function_coverage=1 00:05:04.097 --rc genhtml_legend=1 00:05:04.097 --rc geninfo_all_blocks=1 00:05:04.097 --rc geninfo_unexecuted_blocks=1 00:05:04.097 00:05:04.097 ' 00:05:04.097 20:51:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.097 --rc genhtml_branch_coverage=1 00:05:04.097 --rc genhtml_function_coverage=1 00:05:04.097 --rc genhtml_legend=1 00:05:04.097 --rc geninfo_all_blocks=1 00:05:04.097 --rc geninfo_unexecuted_blocks=1 00:05:04.097 00:05:04.097 ' 00:05:04.097 20:51:24 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:04.097 20:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.097 20:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.097 20:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:04.097 ************************************ 00:05:04.097 START TEST env_memory 00:05:04.097 ************************************ 00:05:04.097 20:51:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:04.097 00:05:04.097 00:05:04.097 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.097 http://cunit.sourceforge.net/ 00:05:04.097 00:05:04.097 00:05:04.097 Suite: memory 00:05:04.097 Test: alloc and free memory map ...[2024-12-08 20:51:25.055727] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:04.097 passed 00:05:04.097 Test: mem 
map translation ...[2024-12-08 20:51:25.114956] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:04.097 [2024-12-08 20:51:25.115217] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:04.097 [2024-12-08 20:51:25.115490] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:04.097 [2024-12-08 20:51:25.115685] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:04.355 passed 00:05:04.355 Test: mem map registration ...[2024-12-08 20:51:25.212632] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:04.355 [2024-12-08 20:51:25.212874] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:04.355 passed 00:05:04.355 Test: mem map adjacent registrations ...passed 00:05:04.355 00:05:04.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.355 suites 1 1 n/a 0 0 00:05:04.355 tests 4 4 4 0 0 00:05:04.355 asserts 152 152 152 0 n/a 00:05:04.355 00:05:04.355 Elapsed time = 0.337 seconds 00:05:04.355 00:05:04.355 real 0m0.380s 00:05:04.355 user 0m0.351s 00:05:04.355 sys 0m0.018s 00:05:04.355 20:51:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.355 20:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:04.355 ************************************ 00:05:04.355 END TEST env_memory 00:05:04.355 ************************************ 00:05:04.613 20:51:25 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:04.613 20:51:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.613 20:51:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.613 20:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:04.613 ************************************ 00:05:04.613 START TEST env_vtophys 00:05:04.613 ************************************ 00:05:04.613 20:51:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:04.613 EAL: lib.eal log level changed from notice to debug 00:05:04.613 EAL: Detected lcore 0 as core 0 on socket 0 00:05:04.613 EAL: Detected lcore 1 as core 0 on socket 0 00:05:04.613 EAL: Detected lcore 2 as core 0 on socket 0 00:05:04.613 EAL: Detected lcore 3 as core 0 on socket 0 00:05:04.613 EAL: Detected lcore 4 as core 0 on socket 0 00:05:04.613 EAL: Detected lcore 5 as core 0 on socket 0 00:05:04.613 EAL: Detected lcore 6 as core 0 on socket 0 00:05:04.613 EAL: Detected lcore 7 as core 0 on socket 0 00:05:04.613 EAL: Detected lcore 8 as core 0 on socket 0 00:05:04.613 EAL: Detected lcore 9 as core 0 on socket 0 00:05:04.613 EAL: Maximum logical cores by configuration: 128 00:05:04.613 EAL: Detected CPU lcores: 10 00:05:04.613 EAL: Detected NUMA nodes: 1 00:05:04.613 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:04.613 EAL: Detected shared linkage of DPDK 00:05:04.613 EAL: No shared files mode enabled, IPC will be disabled 00:05:04.613 EAL: Selected IOVA mode 'PA' 00:05:04.613 EAL: Probing VFIO support... 
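The four *ERROR* lines from memory_ut above are deliberate negative cases: spdk_mem_map_set_translation and spdk_mem_register reject any range whose address or length is not hugepage-aligned, plus addresses outside the user-mode range. A minimal shell sketch of the checks those messages imply (the 2 MiB mask and the 47-bit user-VA bound are inferred from the error text, not lifted from SPDK's memory.c):

    MASK_2MB=$(( (1 << 21) - 1 ))   # 2 MiB = 2097152, the hugepage size in this run
    check_range() {
        local vaddr=$1 len=$2
        (( vaddr & MASK_2MB ))   && { echo "vaddr $vaddr not 2 MiB aligned"; return 1; }
        (( len & MASK_2MB ))     && { echo "len $len not 2 MiB aligned"; return 1; }
        (( vaddr >= (1 << 47) )) && { echo "vaddr $vaddr not a user-mode address"; return 1; }
        echo "vaddr=$vaddr len=$len ok"
    }
    check_range 2097152 1234               # rejected: len unaligned (vaddr=2097152 len=1234 above)
    check_range 1234 2097152               # rejected: vaddr unaligned
    check_range 281474976710656 2097152    # rejected: 2^48 lies past the user VA range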
00:05:04.613 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:04.613 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:04.613 EAL: Ask a virtual area of 0x2e000 bytes 00:05:04.613 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:04.613 EAL: Setting up physically contiguous memory... 00:05:04.613 EAL: Setting maximum number of open files to 524288 00:05:04.613 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:04.613 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:04.613 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.613 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:04.613 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.613 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.613 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:04.613 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:04.613 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.613 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:04.613 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.613 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.613 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:04.613 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:04.614 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.614 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:04.614 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.614 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.614 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:04.614 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:04.614 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.614 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:04.614 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.614 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.614 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:04.614 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:04.614 EAL: Hugepages will be freed exactly as allocated. 00:05:04.614 EAL: No shared files mode enabled, IPC is disabled 00:05:04.614 EAL: No shared files mode enabled, IPC is disabled 00:05:04.614 EAL: TSC frequency is ~2200000 KHz 00:05:04.614 EAL: Main lcore 0 is ready (tid=7f518dc51a40;cpuset=[0]) 00:05:04.614 EAL: Trying to obtain current memory policy. 00:05:04.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.614 EAL: Restoring previous memory policy: 0 00:05:04.614 EAL: request: mp_malloc_sync 00:05:04.614 EAL: No shared files mode enabled, IPC is disabled 00:05:04.614 EAL: Heap on socket 0 was expanded by 2MB 00:05:04.614 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:04.614 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:04.614 EAL: Mem event callback 'spdk:(nil)' registered 00:05:04.614 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:04.614 00:05:04.614 00:05:04.614 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.614 http://cunit.sourceforge.net/ 00:05:04.614 00:05:04.614 00:05:04.614 Suite: components_suite 00:05:05.181 Test: vtophys_malloc_test ...passed 00:05:05.181 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
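The repeated "Ask a virtual area of 0x400000000 bytes" lines above are the up-front VA reservations for the four memseg lists the EAL announced (n_segs:8192 at hugepage_sz:2097152 each); each reservation is simply segment count times page size. The arithmetic, checked in shell:

    n_segs=8192
    hugepage_sz=2097152                        # 2 MiB pages, as detected above
    per_list=$(( n_segs * hugepage_sz ))
    printf 'per list:  0x%x = %d GiB\n' "$per_list" $(( per_list >> 30 ))
    printf 'all lists: %d GiB of reserved VA\n' $(( 4 * per_list >> 30 ))
    # per list:  0x400000000 = 16 GiB
    # all lists: 64 GiB of reserved VA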
00:05:05.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.181 EAL: Restoring previous memory policy: 4 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was expanded by 4MB 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was shrunk by 4MB 00:05:05.181 EAL: Trying to obtain current memory policy. 00:05:05.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.181 EAL: Restoring previous memory policy: 4 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was expanded by 6MB 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was shrunk by 6MB 00:05:05.181 EAL: Trying to obtain current memory policy. 00:05:05.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.181 EAL: Restoring previous memory policy: 4 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was expanded by 10MB 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was shrunk by 10MB 00:05:05.181 EAL: Trying to obtain current memory policy. 00:05:05.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.181 EAL: Restoring previous memory policy: 4 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was expanded by 18MB 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was shrunk by 18MB 00:05:05.181 EAL: Trying to obtain current memory policy. 00:05:05.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.181 EAL: Restoring previous memory policy: 4 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was expanded by 34MB 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.181 EAL: Trying to obtain current memory policy. 
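Each expand/shrink pair in vtophys_spdk_malloc_test is one allocation round trip: DPDK grows the heap by the request rounded up to whole hugepages, fires the registered 'spdk:(nil)' mem event callback so SPDK can refresh its DMA mappings, and mp_malloc_sync is the multiprocess synchronization step (a no-op in this run, since IPC is disabled). One way to pull the progression out of a captured log (vtophys.log is a hypothetical capture path):

    grep -o 'expanded by [0-9]*MB' vtophys.log | awk '{print $3}'
    # 2MB 4MB 6MB 10MB 18MB 34MB ... one entry per allocation step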
00:05:05.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.181 EAL: Restoring previous memory policy: 4 00:05:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.181 EAL: request: mp_malloc_sync 00:05:05.181 EAL: No shared files mode enabled, IPC is disabled 00:05:05.181 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.440 EAL: request: mp_malloc_sync 00:05:05.440 EAL: No shared files mode enabled, IPC is disabled 00:05:05.440 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.440 EAL: Trying to obtain current memory policy. 00:05:05.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.440 EAL: Restoring previous memory policy: 4 00:05:05.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.440 EAL: request: mp_malloc_sync 00:05:05.440 EAL: No shared files mode enabled, IPC is disabled 00:05:05.440 EAL: Heap on socket 0 was expanded by 130MB 00:05:05.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.716 EAL: request: mp_malloc_sync 00:05:05.716 EAL: No shared files mode enabled, IPC is disabled 00:05:05.716 EAL: Heap on socket 0 was shrunk by 130MB 00:05:05.716 EAL: Trying to obtain current memory policy. 00:05:05.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.716 EAL: Restoring previous memory policy: 4 00:05:05.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.716 EAL: request: mp_malloc_sync 00:05:05.716 EAL: No shared files mode enabled, IPC is disabled 00:05:05.716 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.974 EAL: request: mp_malloc_sync 00:05:05.974 EAL: No shared files mode enabled, IPC is disabled 00:05:05.974 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.232 EAL: Trying to obtain current memory policy. 00:05:06.232 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.490 EAL: Restoring previous memory policy: 4 00:05:06.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.490 EAL: request: mp_malloc_sync 00:05:06.490 EAL: No shared files mode enabled, IPC is disabled 00:05:06.490 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.057 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.057 EAL: request: mp_malloc_sync 00:05:07.057 EAL: No shared files mode enabled, IPC is disabled 00:05:07.057 EAL: Heap on socket 0 was shrunk by 514MB 00:05:07.623 EAL: Trying to obtain current memory policy. 
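The step sizes are not arbitrary: after the initial 2 MB heap expansion, every step in this suite is a doubling power of two plus 2 MB, i.e. 2^n + 2 MB (reading the extra 2 MB as per-allocation heap overhead is an inference from the numbers, not from the test source). Reproducing the sequence, which matches 4MB above through the 1026MB step that follows:

    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB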
00:05:07.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.883 EAL: Restoring previous memory policy: 4 00:05:07.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.883 EAL: request: mp_malloc_sync 00:05:07.883 EAL: No shared files mode enabled, IPC is disabled 00:05:07.883 EAL: Heap on socket 0 was expanded by 1026MB 00:05:09.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.262 EAL: request: mp_malloc_sync 00:05:09.262 EAL: No shared files mode enabled, IPC is disabled 00:05:09.262 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:10.200 passed 00:05:10.200 00:05:10.200 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.200 suites 1 1 n/a 0 0 00:05:10.201 tests 2 2 2 0 0 00:05:10.201 asserts 5439 5439 5439 0 n/a 00:05:10.201 00:05:10.201 Elapsed time = 5.388 seconds 00:05:10.201 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.201 EAL: request: mp_malloc_sync 00:05:10.201 EAL: No shared files mode enabled, IPC is disabled 00:05:10.201 EAL: Heap on socket 0 was shrunk by 2MB 00:05:10.201 EAL: No shared files mode enabled, IPC is disabled 00:05:10.201 EAL: No shared files mode enabled, IPC is disabled 00:05:10.201 EAL: No shared files mode enabled, IPC is disabled 00:05:10.201 00:05:10.201 real 0m5.699s 00:05:10.201 user 0m4.902s 00:05:10.201 sys 0m0.641s 00:05:10.201 ************************************ 00:05:10.201 END TEST env_vtophys 00:05:10.201 ************************************ 00:05:10.201 20:51:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.201 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.201 20:51:31 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.201 20:51:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.201 20:51:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.201 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.201 ************************************ 00:05:10.201 START TEST env_pci 00:05:10.201 ************************************ 00:05:10.201 20:51:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.201 00:05:10.201 00:05:10.201 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.201 http://cunit.sourceforge.net/ 00:05:10.201 00:05:10.201 00:05:10.201 Suite: pci 00:05:10.201 Test: pci_hook ...[2024-12-08 20:51:31.193725] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56352 has claimed it 00:05:10.201 passed 00:05:10.201 00:05:10.201 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.201 suites 1 1 n/a 0 0 00:05:10.201 tests 1 1 1 0 0 00:05:10.201 asserts 25 25 25 0 n/a 00:05:10.201 00:05:10.201 Elapsed time = 0.005 seconds 00:05:10.201 EAL: Cannot find device (10000:00:01.0) 00:05:10.201 EAL: Failed to attach device on primary process 00:05:10.201 00:05:10.201 real 0m0.070s 00:05:10.201 user 0m0.035s 00:05:10.201 sys 0m0.034s 00:05:10.201 20:51:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.201 ************************************ 00:05:10.201 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.201 END TEST env_pci 00:05:10.201 ************************************ 00:05:10.460 20:51:31 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:10.460 20:51:31 -- env/env.sh@15 -- # uname 00:05:10.460 20:51:31 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:10.460 20:51:31 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:10.460 20:51:31 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.460 20:51:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:10.460 20:51:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.460 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.460 ************************************ 00:05:10.460 START TEST env_dpdk_post_init 00:05:10.460 ************************************ 00:05:10.460 20:51:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.460 EAL: Detected CPU lcores: 10 00:05:10.460 EAL: Detected NUMA nodes: 1 00:05:10.460 EAL: Detected shared linkage of DPDK 00:05:10.460 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.460 EAL: Selected IOVA mode 'PA' 00:05:10.460 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.720 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:10.720 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:10.720 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:10.720 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:10.720 Starting DPDK initialization... 00:05:10.720 Starting SPDK post initialization... 00:05:10.720 SPDK NVMe probe 00:05:10.720 Attaching to 0000:00:06.0 00:05:10.720 Attaching to 0000:00:07.0 00:05:10.720 Attaching to 0000:00:08.0 00:05:10.720 Attaching to 0000:00:09.0 00:05:10.720 Attached to 0000:00:06.0 00:05:10.720 Attached to 0000:00:07.0 00:05:10.720 Attached to 0000:00:09.0 00:05:10.720 Attached to 0000:00:08.0 00:05:10.720 Cleaning up... 00:05:10.720 00:05:10.720 real 0m0.278s 00:05:10.720 user 0m0.101s 00:05:10.720 sys 0m0.078s 00:05:10.720 20:51:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.720 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.720 ************************************ 00:05:10.720 END TEST env_dpdk_post_init 00:05:10.720 ************************************ 00:05:10.720 20:51:31 -- env/env.sh@26 -- # uname 00:05:10.720 20:51:31 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:10.720 20:51:31 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.720 20:51:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.720 20:51:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.720 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.720 ************************************ 00:05:10.720 START TEST env_mem_callbacks 00:05:10.720 ************************************ 00:05:10.720 20:51:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.720 EAL: Detected CPU lcores: 10 00:05:10.720 EAL: Detected NUMA nodes: 1 00:05:10.720 EAL: Detected shared linkage of DPDK 00:05:10.720 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.720 EAL: Selected IOVA mode 'PA' 00:05:10.980 00:05:10.980 00:05:10.980 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.980 http://cunit.sourceforge.net/ 00:05:10.980 00:05:10.980 00:05:10.980 Suite: memory 00:05:10.980 Test: test ... 
00:05:10.980 register 0x200000200000 2097152 00:05:10.980 malloc 3145728 00:05:10.980 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.980 register 0x200000400000 4194304 00:05:10.980 buf 0x2000004fffc0 len 3145728 PASSED 00:05:10.980 malloc 64 00:05:10.980 buf 0x2000004ffec0 len 64 PASSED 00:05:10.980 malloc 4194304 00:05:10.980 register 0x200000800000 6291456 00:05:10.980 buf 0x2000009fffc0 len 4194304 PASSED 00:05:10.980 free 0x2000004fffc0 3145728 00:05:10.980 free 0x2000004ffec0 64 00:05:10.980 unregister 0x200000400000 4194304 PASSED 00:05:10.980 free 0x2000009fffc0 4194304 00:05:10.980 unregister 0x200000800000 6291456 PASSED 00:05:10.980 malloc 8388608 00:05:10.980 register 0x200000400000 10485760 00:05:10.980 buf 0x2000005fffc0 len 8388608 PASSED 00:05:10.980 free 0x2000005fffc0 8388608 00:05:10.980 unregister 0x200000400000 10485760 PASSED 00:05:10.980 passed 00:05:10.980 00:05:10.980 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.980 suites 1 1 n/a 0 0 00:05:10.980 tests 1 1 1 0 0 00:05:10.980 asserts 15 15 15 0 n/a 00:05:10.980 00:05:10.980 Elapsed time = 0.048 seconds 00:05:10.980 00:05:10.980 real 0m0.248s 00:05:10.980 user 0m0.083s 00:05:10.980 sys 0m0.061s 00:05:10.980 20:51:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.980 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.980 ************************************ 00:05:10.980 END TEST env_mem_callbacks 00:05:10.980 ************************************ 00:05:10.980 00:05:10.980 real 0m7.143s 00:05:10.980 user 0m5.680s 00:05:10.980 sys 0m1.067s 00:05:10.980 20:51:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.980 ************************************ 00:05:10.980 END TEST env 00:05:10.980 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.980 ************************************ 00:05:10.980 20:51:31 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:10.980 20:51:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.980 20:51:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.980 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.980 ************************************ 00:05:10.980 START TEST rpc 00:05:10.980 ************************************ 00:05:10.980 20:51:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.240 * Looking for test storage... 
00:05:11.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.240 20:51:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.240 20:51:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.240 20:51:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.240 20:51:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.240 20:51:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.240 20:51:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.240 20:51:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.240 20:51:32 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.240 20:51:32 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.240 20:51:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.240 20:51:32 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.240 20:51:32 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.240 20:51:32 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.240 20:51:32 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.240 20:51:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.240 20:51:32 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.240 20:51:32 -- scripts/common.sh@344 -- # : 1 00:05:11.240 20:51:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.240 20:51:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.240 20:51:32 -- scripts/common.sh@364 -- # decimal 1 00:05:11.240 20:51:32 -- scripts/common.sh@352 -- # local d=1 00:05:11.240 20:51:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.240 20:51:32 -- scripts/common.sh@354 -- # echo 1 00:05:11.240 20:51:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.240 20:51:32 -- scripts/common.sh@365 -- # decimal 2 00:05:11.240 20:51:32 -- scripts/common.sh@352 -- # local d=2 00:05:11.240 20:51:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.240 20:51:32 -- scripts/common.sh@354 -- # echo 2 00:05:11.240 20:51:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.240 20:51:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.240 20:51:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.240 20:51:32 -- scripts/common.sh@367 -- # return 0 00:05:11.240 20:51:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.240 20:51:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.240 --rc genhtml_branch_coverage=1 00:05:11.240 --rc genhtml_function_coverage=1 00:05:11.240 --rc genhtml_legend=1 00:05:11.240 --rc geninfo_all_blocks=1 00:05:11.240 --rc geninfo_unexecuted_blocks=1 00:05:11.240 00:05:11.240 ' 00:05:11.240 20:51:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.240 --rc genhtml_branch_coverage=1 00:05:11.240 --rc genhtml_function_coverage=1 00:05:11.240 --rc genhtml_legend=1 00:05:11.240 --rc geninfo_all_blocks=1 00:05:11.240 --rc geninfo_unexecuted_blocks=1 00:05:11.240 00:05:11.240 ' 00:05:11.240 20:51:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.240 --rc genhtml_branch_coverage=1 00:05:11.240 --rc genhtml_function_coverage=1 00:05:11.240 --rc genhtml_legend=1 00:05:11.240 --rc geninfo_all_blocks=1 00:05:11.240 --rc geninfo_unexecuted_blocks=1 00:05:11.240 00:05:11.240 ' 00:05:11.240 20:51:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.240 --rc genhtml_branch_coverage=1 00:05:11.240 --rc genhtml_function_coverage=1 00:05:11.240 --rc genhtml_legend=1 00:05:11.240 --rc geninfo_all_blocks=1 00:05:11.240 --rc geninfo_unexecuted_blocks=1 00:05:11.240 00:05:11.240 ' 00:05:11.240 20:51:32 -- rpc/rpc.sh@65 -- # spdk_pid=56478 00:05:11.240 20:51:32 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:11.240 20:51:32 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.240 20:51:32 -- rpc/rpc.sh@67 -- # waitforlisten 56478 00:05:11.240 20:51:32 -- common/autotest_common.sh@829 -- # '[' -z 56478 ']' 00:05:11.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.240 20:51:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.240 20:51:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.240 20:51:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.240 20:51:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.240 20:51:32 -- common/autotest_common.sh@10 -- # set +x 00:05:11.500 [2024-12-08 20:51:32.282378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:11.500 [2024-12-08 20:51:32.282857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56478 ] 00:05:11.500 [2024-12-08 20:51:32.447962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.759 [2024-12-08 20:51:32.589381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:11.759 [2024-12-08 20:51:32.589799] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:11.759 [2024-12-08 20:51:32.589829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56478' to capture a snapshot of events at runtime. 00:05:11.759 [2024-12-08 20:51:32.589843] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56478 for offline analysis/debug. 
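With spdk_tgt up and listening, the rpc_* tests that follow drive it through rpc_cmd, a thin wrapper around scripts/rpc.py speaking JSON-RPC on the socket above. The rpc_integrity sequence below is roughly this, run by hand (RPC names and arguments exactly as the test uses them; jq mirrors its length checks):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock bdev_get_bdevs | jq length                   # 0 on a fresh target
    $rpc -s /var/tmp/spdk.sock bdev_malloc_create 8 512                     # 8 MB, 512 B blocks; prints Malloc0
    $rpc -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
    $rpc -s /var/tmp/spdk.sock bdev_get_bdevs | jq length                   # 2: Malloc0 plus Passthru0
    $rpc -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
    $rpc -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0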
00:05:11.759 [2024-12-08 20:51:32.589877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.136 20:51:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.136 20:51:33 -- common/autotest_common.sh@862 -- # return 0 00:05:13.136 20:51:33 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.136 20:51:33 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.136 20:51:33 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:13.136 20:51:33 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:13.136 20:51:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.136 20:51:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.136 20:51:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.136 ************************************ 00:05:13.136 START TEST rpc_integrity 00:05:13.136 ************************************ 00:05:13.136 20:51:33 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:13.136 20:51:33 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.136 20:51:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.136 20:51:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.136 20:51:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.136 20:51:33 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.136 20:51:33 -- rpc/rpc.sh@13 -- # jq length 00:05:13.136 20:51:33 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.136 20:51:33 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.136 20:51:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.136 20:51:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.136 20:51:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.136 20:51:33 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:13.136 20:51:33 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.136 20:51:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.136 20:51:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.137 20:51:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.137 20:51:33 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.137 { 00:05:13.137 "name": "Malloc0", 00:05:13.137 "aliases": [ 00:05:13.137 "0e75d1f8-58ca-425b-be83-1b78dade46a5" 00:05:13.137 ], 00:05:13.137 "product_name": "Malloc disk", 00:05:13.137 "block_size": 512, 00:05:13.137 "num_blocks": 16384, 00:05:13.137 "uuid": "0e75d1f8-58ca-425b-be83-1b78dade46a5", 00:05:13.137 "assigned_rate_limits": { 00:05:13.137 "rw_ios_per_sec": 0, 00:05:13.137 "rw_mbytes_per_sec": 0, 00:05:13.137 "r_mbytes_per_sec": 0, 00:05:13.137 "w_mbytes_per_sec": 0 00:05:13.137 }, 00:05:13.137 "claimed": false, 00:05:13.137 "zoned": false, 00:05:13.137 "supported_io_types": { 00:05:13.137 "read": true, 00:05:13.137 "write": true, 00:05:13.137 "unmap": true, 00:05:13.137 "write_zeroes": true, 00:05:13.137 "flush": true, 00:05:13.137 "reset": true, 00:05:13.137 "compare": false, 00:05:13.137 "compare_and_write": false, 00:05:13.137 "abort": true, 00:05:13.137 "nvme_admin": false, 00:05:13.137 "nvme_io": false 00:05:13.137 }, 00:05:13.137 "memory_domains": [ 00:05:13.137 { 00:05:13.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.137 
"dma_device_type": 2 00:05:13.137 } 00:05:13.137 ], 00:05:13.137 "driver_specific": {} 00:05:13.137 } 00:05:13.137 ]' 00:05:13.137 20:51:33 -- rpc/rpc.sh@17 -- # jq length 00:05:13.137 20:51:33 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.137 20:51:33 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:13.137 20:51:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.137 20:51:33 -- common/autotest_common.sh@10 -- # set +x 00:05:13.137 [2024-12-08 20:51:33.994195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:13.137 [2024-12-08 20:51:33.994277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.137 [2024-12-08 20:51:33.994307] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:13.137 [2024-12-08 20:51:33.994325] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.137 [2024-12-08 20:51:33.996791] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.137 [2024-12-08 20:51:33.996962] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.137 Passthru0 00:05:13.137 20:51:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.137 20:51:33 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.137 20:51:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.137 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.137 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.137 20:51:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.137 { 00:05:13.137 "name": "Malloc0", 00:05:13.137 "aliases": [ 00:05:13.137 "0e75d1f8-58ca-425b-be83-1b78dade46a5" 00:05:13.137 ], 00:05:13.137 "product_name": "Malloc disk", 00:05:13.137 "block_size": 512, 00:05:13.137 "num_blocks": 16384, 00:05:13.137 "uuid": "0e75d1f8-58ca-425b-be83-1b78dade46a5", 00:05:13.137 "assigned_rate_limits": { 00:05:13.137 "rw_ios_per_sec": 0, 00:05:13.137 "rw_mbytes_per_sec": 0, 00:05:13.137 "r_mbytes_per_sec": 0, 00:05:13.137 "w_mbytes_per_sec": 0 00:05:13.137 }, 00:05:13.137 "claimed": true, 00:05:13.137 "claim_type": "exclusive_write", 00:05:13.137 "zoned": false, 00:05:13.137 "supported_io_types": { 00:05:13.137 "read": true, 00:05:13.137 "write": true, 00:05:13.137 "unmap": true, 00:05:13.137 "write_zeroes": true, 00:05:13.137 "flush": true, 00:05:13.137 "reset": true, 00:05:13.137 "compare": false, 00:05:13.137 "compare_and_write": false, 00:05:13.137 "abort": true, 00:05:13.137 "nvme_admin": false, 00:05:13.137 "nvme_io": false 00:05:13.137 }, 00:05:13.137 "memory_domains": [ 00:05:13.137 { 00:05:13.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.137 "dma_device_type": 2 00:05:13.137 } 00:05:13.137 ], 00:05:13.137 "driver_specific": {} 00:05:13.137 }, 00:05:13.137 { 00:05:13.137 "name": "Passthru0", 00:05:13.137 "aliases": [ 00:05:13.137 "09026112-0339-591a-92ef-681aa59666ce" 00:05:13.137 ], 00:05:13.137 "product_name": "passthru", 00:05:13.137 "block_size": 512, 00:05:13.137 "num_blocks": 16384, 00:05:13.137 "uuid": "09026112-0339-591a-92ef-681aa59666ce", 00:05:13.137 "assigned_rate_limits": { 00:05:13.137 "rw_ios_per_sec": 0, 00:05:13.137 "rw_mbytes_per_sec": 0, 00:05:13.137 "r_mbytes_per_sec": 0, 00:05:13.137 "w_mbytes_per_sec": 0 00:05:13.137 }, 00:05:13.137 "claimed": false, 00:05:13.137 "zoned": false, 00:05:13.137 "supported_io_types": { 00:05:13.137 "read": true, 00:05:13.137 "write": true, 00:05:13.137 "unmap": true, 00:05:13.137 
"write_zeroes": true, 00:05:13.137 "flush": true, 00:05:13.137 "reset": true, 00:05:13.137 "compare": false, 00:05:13.137 "compare_and_write": false, 00:05:13.137 "abort": true, 00:05:13.137 "nvme_admin": false, 00:05:13.137 "nvme_io": false 00:05:13.137 }, 00:05:13.137 "memory_domains": [ 00:05:13.137 { 00:05:13.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.137 "dma_device_type": 2 00:05:13.137 } 00:05:13.137 ], 00:05:13.137 "driver_specific": { 00:05:13.137 "passthru": { 00:05:13.137 "name": "Passthru0", 00:05:13.137 "base_bdev_name": "Malloc0" 00:05:13.137 } 00:05:13.137 } 00:05:13.137 } 00:05:13.137 ]' 00:05:13.137 20:51:34 -- rpc/rpc.sh@21 -- # jq length 00:05:13.137 20:51:34 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.137 20:51:34 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.137 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.137 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.137 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.137 20:51:34 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:13.137 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.137 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.137 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.137 20:51:34 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.137 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.137 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.137 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.137 20:51:34 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.137 20:51:34 -- rpc/rpc.sh@26 -- # jq length 00:05:13.137 ************************************ 00:05:13.137 END TEST rpc_integrity 00:05:13.137 ************************************ 00:05:13.137 20:51:34 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.137 00:05:13.137 real 0m0.338s 00:05:13.137 user 0m0.220s 00:05:13.137 sys 0m0.037s 00:05:13.137 20:51:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.137 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.396 20:51:34 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.396 20:51:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.396 20:51:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.396 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.396 ************************************ 00:05:13.396 START TEST rpc_plugins 00:05:13.396 ************************************ 00:05:13.396 20:51:34 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:13.396 20:51:34 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.396 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.396 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.396 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.396 20:51:34 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.396 20:51:34 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.396 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.396 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.396 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.396 20:51:34 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.396 { 00:05:13.396 "name": "Malloc1", 00:05:13.396 "aliases": [ 00:05:13.396 "d4bd074d-8301-4205-b8b0-5067a8ed6e68" 00:05:13.396 ], 00:05:13.396 "product_name": "Malloc disk", 00:05:13.396 
"block_size": 4096, 00:05:13.396 "num_blocks": 256, 00:05:13.396 "uuid": "d4bd074d-8301-4205-b8b0-5067a8ed6e68", 00:05:13.396 "assigned_rate_limits": { 00:05:13.396 "rw_ios_per_sec": 0, 00:05:13.396 "rw_mbytes_per_sec": 0, 00:05:13.396 "r_mbytes_per_sec": 0, 00:05:13.396 "w_mbytes_per_sec": 0 00:05:13.396 }, 00:05:13.396 "claimed": false, 00:05:13.396 "zoned": false, 00:05:13.396 "supported_io_types": { 00:05:13.396 "read": true, 00:05:13.396 "write": true, 00:05:13.396 "unmap": true, 00:05:13.396 "write_zeroes": true, 00:05:13.396 "flush": true, 00:05:13.396 "reset": true, 00:05:13.396 "compare": false, 00:05:13.396 "compare_and_write": false, 00:05:13.396 "abort": true, 00:05:13.396 "nvme_admin": false, 00:05:13.396 "nvme_io": false 00:05:13.396 }, 00:05:13.396 "memory_domains": [ 00:05:13.396 { 00:05:13.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.396 "dma_device_type": 2 00:05:13.396 } 00:05:13.396 ], 00:05:13.396 "driver_specific": {} 00:05:13.396 } 00:05:13.396 ]' 00:05:13.396 20:51:34 -- rpc/rpc.sh@32 -- # jq length 00:05:13.396 20:51:34 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.396 20:51:34 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.396 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.396 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.396 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.396 20:51:34 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.396 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.396 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.396 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.396 20:51:34 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.397 20:51:34 -- rpc/rpc.sh@36 -- # jq length 00:05:13.397 ************************************ 00:05:13.397 END TEST rpc_plugins 00:05:13.397 ************************************ 00:05:13.397 20:51:34 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.397 00:05:13.397 real 0m0.166s 00:05:13.397 user 0m0.105s 00:05:13.397 sys 0m0.021s 00:05:13.397 20:51:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.397 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.655 20:51:34 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.655 20:51:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.655 20:51:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.655 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.655 ************************************ 00:05:13.655 START TEST rpc_trace_cmd_test 00:05:13.655 ************************************ 00:05:13.655 20:51:34 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:13.655 20:51:34 -- rpc/rpc.sh@40 -- # local info 00:05:13.655 20:51:34 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.655 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.655 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.655 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.655 20:51:34 -- rpc/rpc.sh@42 -- # info='{ 00:05:13.655 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56478", 00:05:13.655 "tpoint_group_mask": "0x8", 00:05:13.655 "iscsi_conn": { 00:05:13.655 "mask": "0x2", 00:05:13.655 "tpoint_mask": "0x0" 00:05:13.655 }, 00:05:13.655 "scsi": { 00:05:13.655 "mask": "0x4", 00:05:13.655 "tpoint_mask": "0x0" 00:05:13.655 }, 00:05:13.655 "bdev": { 00:05:13.655 "mask": "0x8", 00:05:13.655 "tpoint_mask": 
"0xffffffffffffffff" 00:05:13.655 }, 00:05:13.655 "nvmf_rdma": { 00:05:13.655 "mask": "0x10", 00:05:13.655 "tpoint_mask": "0x0" 00:05:13.656 }, 00:05:13.656 "nvmf_tcp": { 00:05:13.656 "mask": "0x20", 00:05:13.656 "tpoint_mask": "0x0" 00:05:13.656 }, 00:05:13.656 "ftl": { 00:05:13.656 "mask": "0x40", 00:05:13.656 "tpoint_mask": "0x0" 00:05:13.656 }, 00:05:13.656 "blobfs": { 00:05:13.656 "mask": "0x80", 00:05:13.656 "tpoint_mask": "0x0" 00:05:13.656 }, 00:05:13.656 "dsa": { 00:05:13.656 "mask": "0x200", 00:05:13.656 "tpoint_mask": "0x0" 00:05:13.656 }, 00:05:13.656 "thread": { 00:05:13.656 "mask": "0x400", 00:05:13.656 "tpoint_mask": "0x0" 00:05:13.656 }, 00:05:13.656 "nvme_pcie": { 00:05:13.656 "mask": "0x800", 00:05:13.656 "tpoint_mask": "0x0" 00:05:13.656 }, 00:05:13.656 "iaa": { 00:05:13.656 "mask": "0x1000", 00:05:13.656 "tpoint_mask": "0x0" 00:05:13.656 }, 00:05:13.656 "nvme_tcp": { 00:05:13.656 "mask": "0x2000", 00:05:13.656 "tpoint_mask": "0x0" 00:05:13.656 }, 00:05:13.656 "bdev_nvme": { 00:05:13.656 "mask": "0x4000", 00:05:13.656 "tpoint_mask": "0x0" 00:05:13.656 } 00:05:13.656 }' 00:05:13.656 20:51:34 -- rpc/rpc.sh@43 -- # jq length 00:05:13.656 20:51:34 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:13.656 20:51:34 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.656 20:51:34 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.656 20:51:34 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.656 20:51:34 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.656 20:51:34 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.656 20:51:34 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.656 20:51:34 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.915 ************************************ 00:05:13.915 END TEST rpc_trace_cmd_test 00:05:13.915 ************************************ 00:05:13.915 20:51:34 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.915 00:05:13.915 real 0m0.274s 00:05:13.915 user 0m0.239s 00:05:13.915 sys 0m0.025s 00:05:13.915 20:51:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.915 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.915 20:51:34 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.915 20:51:34 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.915 20:51:34 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.915 20:51:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.915 20:51:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.915 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.915 ************************************ 00:05:13.915 START TEST rpc_daemon_integrity 00:05:13.915 ************************************ 00:05:13.915 20:51:34 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:13.916 20:51:34 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.916 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.916 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.916 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.916 20:51:34 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.916 20:51:34 -- rpc/rpc.sh@13 -- # jq length 00:05:13.916 20:51:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.916 20:51:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.916 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.916 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.916 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.916 20:51:34 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.916 20:51:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.916 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.916 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.916 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.916 20:51:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.916 { 00:05:13.916 "name": "Malloc2", 00:05:13.916 "aliases": [ 00:05:13.916 "a7e69a53-9e29-44e2-8b2e-a12c49b74215" 00:05:13.916 ], 00:05:13.916 "product_name": "Malloc disk", 00:05:13.916 "block_size": 512, 00:05:13.916 "num_blocks": 16384, 00:05:13.916 "uuid": "a7e69a53-9e29-44e2-8b2e-a12c49b74215", 00:05:13.916 "assigned_rate_limits": { 00:05:13.916 "rw_ios_per_sec": 0, 00:05:13.916 "rw_mbytes_per_sec": 0, 00:05:13.916 "r_mbytes_per_sec": 0, 00:05:13.916 "w_mbytes_per_sec": 0 00:05:13.916 }, 00:05:13.916 "claimed": false, 00:05:13.916 "zoned": false, 00:05:13.916 "supported_io_types": { 00:05:13.916 "read": true, 00:05:13.916 "write": true, 00:05:13.916 "unmap": true, 00:05:13.916 "write_zeroes": true, 00:05:13.916 "flush": true, 00:05:13.916 "reset": true, 00:05:13.916 "compare": false, 00:05:13.916 "compare_and_write": false, 00:05:13.916 "abort": true, 00:05:13.916 "nvme_admin": false, 00:05:13.916 "nvme_io": false 00:05:13.916 }, 00:05:13.916 "memory_domains": [ 00:05:13.916 { 00:05:13.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.916 "dma_device_type": 2 00:05:13.916 } 00:05:13.916 ], 00:05:13.916 "driver_specific": {} 00:05:13.916 } 00:05:13.916 ]' 00:05:13.916 20:51:34 -- rpc/rpc.sh@17 -- # jq length 00:05:13.916 20:51:34 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.916 20:51:34 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:13.916 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.916 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.916 [2024-12-08 20:51:34.923687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:13.916 [2024-12-08 20:51:34.923743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.916 [2024-12-08 20:51:34.923766] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:13.916 [2024-12-08 20:51:34.923780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.916 [2024-12-08 20:51:34.926206] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.916 [2024-12-08 20:51:34.926251] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.916 Passthru0 00:05:13.916 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.916 20:51:34 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.916 20:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.916 20:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.916 20:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.175 20:51:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.175 { 00:05:14.175 "name": "Malloc2", 00:05:14.175 "aliases": [ 00:05:14.175 "a7e69a53-9e29-44e2-8b2e-a12c49b74215" 00:05:14.175 ], 00:05:14.175 "product_name": "Malloc disk", 00:05:14.175 "block_size": 512, 00:05:14.175 "num_blocks": 16384, 00:05:14.175 "uuid": "a7e69a53-9e29-44e2-8b2e-a12c49b74215", 00:05:14.175 "assigned_rate_limits": { 00:05:14.175 "rw_ios_per_sec": 0, 00:05:14.175 "rw_mbytes_per_sec": 0, 00:05:14.175 "r_mbytes_per_sec": 0, 00:05:14.175 
"w_mbytes_per_sec": 0 00:05:14.175 }, 00:05:14.175 "claimed": true, 00:05:14.175 "claim_type": "exclusive_write", 00:05:14.175 "zoned": false, 00:05:14.175 "supported_io_types": { 00:05:14.175 "read": true, 00:05:14.175 "write": true, 00:05:14.175 "unmap": true, 00:05:14.175 "write_zeroes": true, 00:05:14.175 "flush": true, 00:05:14.175 "reset": true, 00:05:14.175 "compare": false, 00:05:14.175 "compare_and_write": false, 00:05:14.175 "abort": true, 00:05:14.175 "nvme_admin": false, 00:05:14.176 "nvme_io": false 00:05:14.176 }, 00:05:14.176 "memory_domains": [ 00:05:14.176 { 00:05:14.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.176 "dma_device_type": 2 00:05:14.176 } 00:05:14.176 ], 00:05:14.176 "driver_specific": {} 00:05:14.176 }, 00:05:14.176 { 00:05:14.176 "name": "Passthru0", 00:05:14.176 "aliases": [ 00:05:14.176 "3d41714c-38f3-5fe3-bb6b-8a35e96e1613" 00:05:14.176 ], 00:05:14.176 "product_name": "passthru", 00:05:14.176 "block_size": 512, 00:05:14.176 "num_blocks": 16384, 00:05:14.176 "uuid": "3d41714c-38f3-5fe3-bb6b-8a35e96e1613", 00:05:14.176 "assigned_rate_limits": { 00:05:14.176 "rw_ios_per_sec": 0, 00:05:14.176 "rw_mbytes_per_sec": 0, 00:05:14.176 "r_mbytes_per_sec": 0, 00:05:14.176 "w_mbytes_per_sec": 0 00:05:14.176 }, 00:05:14.176 "claimed": false, 00:05:14.176 "zoned": false, 00:05:14.176 "supported_io_types": { 00:05:14.176 "read": true, 00:05:14.176 "write": true, 00:05:14.176 "unmap": true, 00:05:14.176 "write_zeroes": true, 00:05:14.176 "flush": true, 00:05:14.176 "reset": true, 00:05:14.176 "compare": false, 00:05:14.176 "compare_and_write": false, 00:05:14.176 "abort": true, 00:05:14.176 "nvme_admin": false, 00:05:14.176 "nvme_io": false 00:05:14.176 }, 00:05:14.176 "memory_domains": [ 00:05:14.176 { 00:05:14.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.176 "dma_device_type": 2 00:05:14.176 } 00:05:14.176 ], 00:05:14.176 "driver_specific": { 00:05:14.176 "passthru": { 00:05:14.176 "name": "Passthru0", 00:05:14.176 "base_bdev_name": "Malloc2" 00:05:14.176 } 00:05:14.176 } 00:05:14.176 } 00:05:14.176 ]' 00:05:14.176 20:51:34 -- rpc/rpc.sh@21 -- # jq length 00:05:14.176 20:51:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.176 20:51:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.176 20:51:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.176 20:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.176 20:51:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.176 20:51:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:14.176 20:51:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.176 20:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.176 20:51:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.176 20:51:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.176 20:51:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.176 20:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.176 20:51:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.176 20:51:35 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.176 20:51:35 -- rpc/rpc.sh@26 -- # jq length 00:05:14.176 ************************************ 00:05:14.176 END TEST rpc_daemon_integrity 00:05:14.176 ************************************ 00:05:14.176 20:51:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.176 00:05:14.176 real 0m0.337s 00:05:14.176 user 0m0.213s 00:05:14.176 sys 0m0.043s 00:05:14.176 20:51:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.176 
20:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.176 20:51:35 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:14.176 20:51:35 -- rpc/rpc.sh@84 -- # killprocess 56478 00:05:14.176 20:51:35 -- common/autotest_common.sh@936 -- # '[' -z 56478 ']' 00:05:14.176 20:51:35 -- common/autotest_common.sh@940 -- # kill -0 56478 00:05:14.176 20:51:35 -- common/autotest_common.sh@941 -- # uname 00:05:14.176 20:51:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.176 20:51:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56478 00:05:14.176 killing process with pid 56478 00:05:14.176 20:51:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.176 20:51:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.176 20:51:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56478' 00:05:14.176 20:51:35 -- common/autotest_common.sh@955 -- # kill 56478 00:05:14.176 20:51:35 -- common/autotest_common.sh@960 -- # wait 56478 00:05:16.079 00:05:16.079 real 0m4.808s 00:05:16.079 user 0m5.723s 00:05:16.079 sys 0m0.719s 00:05:16.079 20:51:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.079 ************************************ 00:05:16.079 END TEST rpc 00:05:16.079 ************************************ 00:05:16.079 20:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:16.079 20:51:36 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.079 20:51:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.079 20:51:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.079 20:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:16.079 ************************************ 00:05:16.079 START TEST rpc_client 00:05:16.079 ************************************ 00:05:16.079 20:51:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.079 * Looking for test storage... 00:05:16.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:16.079 20:51:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:16.079 20:51:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:16.079 20:51:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:16.079 20:51:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:16.079 20:51:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:16.079 20:51:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:16.079 20:51:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:16.079 20:51:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:16.079 20:51:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:16.079 20:51:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.079 20:51:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:16.079 20:51:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:16.079 20:51:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:16.079 20:51:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:16.079 20:51:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:16.079 20:51:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:16.079 20:51:36 -- scripts/common.sh@344 -- # : 1 00:05:16.079 20:51:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:16.079 20:51:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.079 20:51:37 -- scripts/common.sh@364 -- # decimal 1 00:05:16.079 20:51:37 -- scripts/common.sh@352 -- # local d=1 00:05:16.079 20:51:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.079 20:51:37 -- scripts/common.sh@354 -- # echo 1 00:05:16.079 20:51:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:16.079 20:51:37 -- scripts/common.sh@365 -- # decimal 2 00:05:16.079 20:51:37 -- scripts/common.sh@352 -- # local d=2 00:05:16.079 20:51:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.079 20:51:37 -- scripts/common.sh@354 -- # echo 2 00:05:16.079 20:51:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:16.079 20:51:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:16.079 20:51:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:16.079 20:51:37 -- scripts/common.sh@367 -- # return 0 00:05:16.079 20:51:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.079 20:51:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:16.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.079 --rc genhtml_branch_coverage=1 00:05:16.079 --rc genhtml_function_coverage=1 00:05:16.079 --rc genhtml_legend=1 00:05:16.079 --rc geninfo_all_blocks=1 00:05:16.079 --rc geninfo_unexecuted_blocks=1 00:05:16.079 00:05:16.079 ' 00:05:16.079 20:51:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:16.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.079 --rc genhtml_branch_coverage=1 00:05:16.079 --rc genhtml_function_coverage=1 00:05:16.079 --rc genhtml_legend=1 00:05:16.079 --rc geninfo_all_blocks=1 00:05:16.079 --rc geninfo_unexecuted_blocks=1 00:05:16.079 00:05:16.079 ' 00:05:16.079 20:51:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:16.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.079 --rc genhtml_branch_coverage=1 00:05:16.079 --rc genhtml_function_coverage=1 00:05:16.079 --rc genhtml_legend=1 00:05:16.079 --rc geninfo_all_blocks=1 00:05:16.079 --rc geninfo_unexecuted_blocks=1 00:05:16.079 00:05:16.079 ' 00:05:16.079 20:51:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:16.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.079 --rc genhtml_branch_coverage=1 00:05:16.079 --rc genhtml_function_coverage=1 00:05:16.079 --rc genhtml_legend=1 00:05:16.079 --rc geninfo_all_blocks=1 00:05:16.079 --rc geninfo_unexecuted_blocks=1 00:05:16.079 00:05:16.079 ' 00:05:16.079 20:51:37 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:16.079 OK 00:05:16.079 20:51:37 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:16.079 00:05:16.079 real 0m0.246s 00:05:16.079 user 0m0.155s 00:05:16.079 sys 0m0.101s 00:05:16.079 20:51:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.079 20:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:16.079 ************************************ 00:05:16.079 END TEST rpc_client 00:05:16.079 ************************************ 00:05:16.339 20:51:37 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:16.339 20:51:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.339 20:51:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.339 20:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:16.339 ************************************ 00:05:16.339 START TEST 
json_config 00:05:16.339 ************************************ 00:05:16.339 20:51:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:16.339 20:51:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:16.339 20:51:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:16.339 20:51:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:16.339 20:51:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:16.339 20:51:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:16.339 20:51:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:16.339 20:51:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:16.339 20:51:37 -- scripts/common.sh@335 -- # IFS=.-: 00:05:16.339 20:51:37 -- scripts/common.sh@335 -- # read -ra ver1 00:05:16.339 20:51:37 -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.339 20:51:37 -- scripts/common.sh@336 -- # read -ra ver2 00:05:16.339 20:51:37 -- scripts/common.sh@337 -- # local 'op=<' 00:05:16.339 20:51:37 -- scripts/common.sh@339 -- # ver1_l=2 00:05:16.339 20:51:37 -- scripts/common.sh@340 -- # ver2_l=1 00:05:16.339 20:51:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:16.339 20:51:37 -- scripts/common.sh@343 -- # case "$op" in 00:05:16.339 20:51:37 -- scripts/common.sh@344 -- # : 1 00:05:16.339 20:51:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:16.339 20:51:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.339 20:51:37 -- scripts/common.sh@364 -- # decimal 1 00:05:16.339 20:51:37 -- scripts/common.sh@352 -- # local d=1 00:05:16.339 20:51:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.339 20:51:37 -- scripts/common.sh@354 -- # echo 1 00:05:16.339 20:51:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:16.339 20:51:37 -- scripts/common.sh@365 -- # decimal 2 00:05:16.339 20:51:37 -- scripts/common.sh@352 -- # local d=2 00:05:16.339 20:51:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.339 20:51:37 -- scripts/common.sh@354 -- # echo 2 00:05:16.339 20:51:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:16.339 20:51:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:16.339 20:51:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:16.339 20:51:37 -- scripts/common.sh@367 -- # return 0 00:05:16.339 20:51:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.339 20:51:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:16.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.339 --rc genhtml_branch_coverage=1 00:05:16.339 --rc genhtml_function_coverage=1 00:05:16.339 --rc genhtml_legend=1 00:05:16.339 --rc geninfo_all_blocks=1 00:05:16.339 --rc geninfo_unexecuted_blocks=1 00:05:16.339 00:05:16.339 ' 00:05:16.339 20:51:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:16.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.339 --rc genhtml_branch_coverage=1 00:05:16.339 --rc genhtml_function_coverage=1 00:05:16.339 --rc genhtml_legend=1 00:05:16.339 --rc geninfo_all_blocks=1 00:05:16.339 --rc geninfo_unexecuted_blocks=1 00:05:16.339 00:05:16.339 ' 00:05:16.339 20:51:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:16.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.339 --rc genhtml_branch_coverage=1 00:05:16.339 --rc genhtml_function_coverage=1 00:05:16.339 --rc genhtml_legend=1 00:05:16.339 --rc 
geninfo_all_blocks=1 00:05:16.339 --rc geninfo_unexecuted_blocks=1 00:05:16.339 00:05:16.339 ' 00:05:16.339 20:51:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:16.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.339 --rc genhtml_branch_coverage=1 00:05:16.339 --rc genhtml_function_coverage=1 00:05:16.339 --rc genhtml_legend=1 00:05:16.339 --rc geninfo_all_blocks=1 00:05:16.339 --rc geninfo_unexecuted_blocks=1 00:05:16.339 00:05:16.339 ' 00:05:16.339 20:51:37 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:16.339 20:51:37 -- nvmf/common.sh@7 -- # uname -s 00:05:16.339 20:51:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.339 20:51:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.339 20:51:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.339 20:51:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.339 20:51:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.339 20:51:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.339 20:51:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.339 20:51:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.339 20:51:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.339 20:51:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.339 20:51:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c908302c-1db1-47eb-b733-054d9c59ff03 00:05:16.339 20:51:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=c908302c-1db1-47eb-b733-054d9c59ff03 00:05:16.339 20:51:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.339 20:51:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.339 20:51:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.339 20:51:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:16.339 20:51:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.339 20:51:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.339 20:51:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.339 20:51:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.339 20:51:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.339 20:51:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.339 
20:51:37 -- paths/export.sh@5 -- # export PATH 00:05:16.339 20:51:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.339 20:51:37 -- nvmf/common.sh@46 -- # : 0 00:05:16.339 20:51:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:16.339 20:51:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:16.339 20:51:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:16.339 20:51:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.339 20:51:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.339 20:51:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:16.339 20:51:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:16.339 20:51:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:16.339 20:51:37 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:16.339 WARNING: No tests are enabled so not running JSON configuration tests 00:05:16.339 20:51:37 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:16.339 20:51:37 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:16.339 20:51:37 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:16.339 20:51:37 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:16.339 20:51:37 -- json_config/json_config.sh@27 -- # exit 0 00:05:16.339 ************************************ 00:05:16.339 END TEST json_config 00:05:16.339 ************************************ 00:05:16.339 00:05:16.339 real 0m0.171s 00:05:16.339 user 0m0.114s 00:05:16.339 sys 0m0.062s 00:05:16.339 20:51:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.339 20:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:16.339 20:51:37 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:16.340 20:51:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.340 20:51:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.340 20:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:16.340 ************************************ 00:05:16.340 START TEST json_config_extra_key 00:05:16.340 ************************************ 00:05:16.340 20:51:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:16.599 20:51:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:16.599 20:51:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:16.599 20:51:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:16.599 20:51:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:16.599 20:51:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:16.599 20:51:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:16.599 20:51:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:16.599 20:51:37 -- scripts/common.sh@335 -- # IFS=.-: 00:05:16.599 20:51:37 -- scripts/common.sh@335 -- # read -ra ver1 00:05:16.599 20:51:37 -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.599 20:51:37 
-- scripts/common.sh@336 -- # read -ra ver2 00:05:16.599 20:51:37 -- scripts/common.sh@337 -- # local 'op=<' 00:05:16.599 20:51:37 -- scripts/common.sh@339 -- # ver1_l=2 00:05:16.599 20:51:37 -- scripts/common.sh@340 -- # ver2_l=1 00:05:16.599 20:51:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:16.599 20:51:37 -- scripts/common.sh@343 -- # case "$op" in 00:05:16.599 20:51:37 -- scripts/common.sh@344 -- # : 1 00:05:16.599 20:51:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:16.599 20:51:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.599 20:51:37 -- scripts/common.sh@364 -- # decimal 1 00:05:16.599 20:51:37 -- scripts/common.sh@352 -- # local d=1 00:05:16.599 20:51:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.599 20:51:37 -- scripts/common.sh@354 -- # echo 1 00:05:16.599 20:51:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:16.599 20:51:37 -- scripts/common.sh@365 -- # decimal 2 00:05:16.599 20:51:37 -- scripts/common.sh@352 -- # local d=2 00:05:16.599 20:51:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.599 20:51:37 -- scripts/common.sh@354 -- # echo 2 00:05:16.599 20:51:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:16.599 20:51:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:16.599 20:51:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:16.599 20:51:37 -- scripts/common.sh@367 -- # return 0 00:05:16.600 20:51:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.600 20:51:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:16.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.600 --rc genhtml_branch_coverage=1 00:05:16.600 --rc genhtml_function_coverage=1 00:05:16.600 --rc genhtml_legend=1 00:05:16.600 --rc geninfo_all_blocks=1 00:05:16.600 --rc geninfo_unexecuted_blocks=1 00:05:16.600 00:05:16.600 ' 00:05:16.600 20:51:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:16.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.600 --rc genhtml_branch_coverage=1 00:05:16.600 --rc genhtml_function_coverage=1 00:05:16.600 --rc genhtml_legend=1 00:05:16.600 --rc geninfo_all_blocks=1 00:05:16.600 --rc geninfo_unexecuted_blocks=1 00:05:16.600 00:05:16.600 ' 00:05:16.600 20:51:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:16.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.600 --rc genhtml_branch_coverage=1 00:05:16.600 --rc genhtml_function_coverage=1 00:05:16.600 --rc genhtml_legend=1 00:05:16.600 --rc geninfo_all_blocks=1 00:05:16.600 --rc geninfo_unexecuted_blocks=1 00:05:16.600 00:05:16.600 ' 00:05:16.600 20:51:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:16.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.600 --rc genhtml_branch_coverage=1 00:05:16.600 --rc genhtml_function_coverage=1 00:05:16.600 --rc genhtml_legend=1 00:05:16.600 --rc geninfo_all_blocks=1 00:05:16.600 --rc geninfo_unexecuted_blocks=1 00:05:16.600 00:05:16.600 ' 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:16.600 20:51:37 -- nvmf/common.sh@7 -- # uname -s 00:05:16.600 20:51:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.600 20:51:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.600 20:51:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.600 20:51:37 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:05:16.600 20:51:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.600 20:51:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.600 20:51:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.600 20:51:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.600 20:51:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.600 20:51:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.600 20:51:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c908302c-1db1-47eb-b733-054d9c59ff03 00:05:16.600 20:51:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=c908302c-1db1-47eb-b733-054d9c59ff03 00:05:16.600 20:51:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.600 20:51:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.600 20:51:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.600 20:51:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:16.600 20:51:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.600 20:51:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.600 20:51:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.600 20:51:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.600 20:51:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.600 20:51:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.600 20:51:37 -- paths/export.sh@5 -- # export PATH 00:05:16.600 20:51:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.600 20:51:37 -- nvmf/common.sh@46 -- # : 0 00:05:16.600 20:51:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:16.600 20:51:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:16.600 20:51:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:16.600 20:51:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.600 20:51:37 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.600 20:51:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:16.600 20:51:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:16.600 20:51:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.600 INFO: launching applications... 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56802 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:16.600 Waiting for target to run... 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:16.600 20:51:37 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56802 /var/tmp/spdk_tgt.sock 00:05:16.600 20:51:37 -- common/autotest_common.sh@829 -- # '[' -z 56802 ']' 00:05:16.600 20:51:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.600 20:51:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.600 20:51:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.600 20:51:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.600 20:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:16.859 [2024-12-08 20:51:37.646122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:16.859 [2024-12-08 20:51:37.646293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56802 ] 00:05:17.119 [2024-12-08 20:51:37.987877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.378 [2024-12-08 20:51:38.194673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.378 [2024-12-08 20:51:38.194975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.313 20:51:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.313 00:05:18.313 20:51:39 -- common/autotest_common.sh@862 -- # return 0 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:18.313 INFO: shutting down applications... 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56802 ]] 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56802 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56802 00:05:18.313 20:51:39 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:18.882 20:51:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:18.882 20:51:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:18.882 20:51:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56802 00:05:18.882 20:51:39 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:19.451 20:51:40 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:19.451 20:51:40 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:19.451 20:51:40 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56802 00:05:19.451 20:51:40 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:20.020 20:51:40 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:20.020 20:51:40 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:20.020 20:51:40 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56802 00:05:20.020 20:51:40 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:20.588 20:51:41 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:20.588 20:51:41 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:20.588 20:51:41 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56802 00:05:20.588 20:51:41 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:20.588 20:51:41 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:20.588 20:51:41 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:20.588 SPDK target shutdown done 00:05:20.588 20:51:41 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:20.588 Success 00:05:20.588 20:51:41 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:20.588 
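The shutdown traced here is a plain poll-until-exit: send SIGINT, then probe with kill -0 for up to thirty half-second intervals before declaring the target gone. A sketch of that loop, assuming the shape visible in the trace:

    pid=${app_pid[$app]}
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 delivers no signal; it only tests whether the pid exists.
        if ! kill -0 "$pid" 2>/dev/null; then
            app_pid[$app]=
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done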
00:05:20.588 real 0m3.979s 00:05:20.588 user 0m3.985s 00:05:20.588 sys 0m0.496s 00:05:20.589 20:51:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.589 ************************************ 00:05:20.589 20:51:41 -- common/autotest_common.sh@10 -- # set +x 00:05:20.589 END TEST json_config_extra_key 00:05:20.589 ************************************ 00:05:20.589 20:51:41 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.589 20:51:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.589 20:51:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.589 20:51:41 -- common/autotest_common.sh@10 -- # set +x 00:05:20.589 ************************************ 00:05:20.589 START TEST alias_rpc 00:05:20.589 ************************************ 00:05:20.589 20:51:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.589 * Looking for test storage... 00:05:20.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:20.589 20:51:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:20.589 20:51:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:20.589 20:51:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:20.589 20:51:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:20.589 20:51:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:20.589 20:51:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:20.589 20:51:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:20.589 20:51:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:20.589 20:51:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:20.589 20:51:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.589 20:51:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:20.589 20:51:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:20.589 20:51:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:20.589 20:51:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:20.589 20:51:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:20.589 20:51:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:20.589 20:51:41 -- scripts/common.sh@344 -- # : 1 00:05:20.589 20:51:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:20.589 20:51:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.589 20:51:41 -- scripts/common.sh@364 -- # decimal 1 00:05:20.589 20:51:41 -- scripts/common.sh@352 -- # local d=1 00:05:20.589 20:51:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.589 20:51:41 -- scripts/common.sh@354 -- # echo 1 00:05:20.589 20:51:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:20.589 20:51:41 -- scripts/common.sh@365 -- # decimal 2 00:05:20.589 20:51:41 -- scripts/common.sh@352 -- # local d=2 00:05:20.589 20:51:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.589 20:51:41 -- scripts/common.sh@354 -- # echo 2 00:05:20.589 20:51:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:20.589 20:51:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:20.589 20:51:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:20.589 20:51:41 -- scripts/common.sh@367 -- # return 0 00:05:20.589 20:51:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.589 20:51:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:20.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.589 --rc genhtml_branch_coverage=1 00:05:20.589 --rc genhtml_function_coverage=1 00:05:20.589 --rc genhtml_legend=1 00:05:20.589 --rc geninfo_all_blocks=1 00:05:20.589 --rc geninfo_unexecuted_blocks=1 00:05:20.589 00:05:20.589 ' 00:05:20.589 20:51:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:20.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.589 --rc genhtml_branch_coverage=1 00:05:20.589 --rc genhtml_function_coverage=1 00:05:20.589 --rc genhtml_legend=1 00:05:20.589 --rc geninfo_all_blocks=1 00:05:20.589 --rc geninfo_unexecuted_blocks=1 00:05:20.589 00:05:20.589 ' 00:05:20.589 20:51:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:20.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.589 --rc genhtml_branch_coverage=1 00:05:20.589 --rc genhtml_function_coverage=1 00:05:20.589 --rc genhtml_legend=1 00:05:20.589 --rc geninfo_all_blocks=1 00:05:20.589 --rc geninfo_unexecuted_blocks=1 00:05:20.589 00:05:20.589 ' 00:05:20.589 20:51:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:20.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.589 --rc genhtml_branch_coverage=1 00:05:20.589 --rc genhtml_function_coverage=1 00:05:20.589 --rc genhtml_legend=1 00:05:20.589 --rc geninfo_all_blocks=1 00:05:20.589 --rc geninfo_unexecuted_blocks=1 00:05:20.589 00:05:20.589 ' 00:05:20.589 20:51:41 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.589 20:51:41 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56906 00:05:20.589 20:51:41 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.589 20:51:41 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56906 00:05:20.589 20:51:41 -- common/autotest_common.sh@829 -- # '[' -z 56906 ']' 00:05:20.589 20:51:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.589 20:51:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.589 20:51:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
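waitforlisten, traced just above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the freshly launched target answers on its RPC socket. A rough sketch of the pattern (the readiness probe and sleep interval shown are illustrative; the real helper in test/common/autotest_common.sh is more involved):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            # Give up early if the target died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            # One possible probe: any RPC succeeds once the socket is up.
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }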
00:05:20.589 20:51:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.589 20:51:41 -- common/autotest_common.sh@10 -- # set +x 00:05:20.848 [2024-12-08 20:51:41.666119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:20.848 [2024-12-08 20:51:41.666261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56906 ] 00:05:20.848 [2024-12-08 20:51:41.812774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.107 [2024-12-08 20:51:41.957227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.107 [2024-12-08 20:51:41.957462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.675 20:51:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.675 20:51:42 -- common/autotest_common.sh@862 -- # return 0 00:05:21.675 20:51:42 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:21.933 20:51:42 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56906 00:05:21.933 20:51:42 -- common/autotest_common.sh@936 -- # '[' -z 56906 ']' 00:05:21.933 20:51:42 -- common/autotest_common.sh@940 -- # kill -0 56906 00:05:21.933 20:51:42 -- common/autotest_common.sh@941 -- # uname 00:05:21.933 20:51:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.933 20:51:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56906 00:05:21.933 20:51:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.933 killing process with pid 56906 00:05:21.933 20:51:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.933 20:51:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56906' 00:05:21.933 20:51:42 -- common/autotest_common.sh@955 -- # kill 56906 00:05:21.933 20:51:42 -- common/autotest_common.sh@960 -- # wait 56906 00:05:23.856 00:05:23.856 real 0m3.135s 00:05:23.856 user 0m3.409s 00:05:23.856 sys 0m0.413s 00:05:23.856 20:51:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.856 ************************************ 00:05:23.856 END TEST alias_rpc 00:05:23.856 ************************************ 00:05:23.856 20:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:23.856 20:51:44 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:23.856 20:51:44 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.856 20:51:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.856 20:51:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.856 20:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:23.856 ************************************ 00:05:23.856 START TEST spdkcli_tcp 00:05:23.857 ************************************ 00:05:23.857 20:51:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.857 * Looking for test storage... 
00:05:23.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:23.857 20:51:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:23.857 20:51:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:23.857 20:51:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:23.857 20:51:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:23.857 20:51:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:23.857 20:51:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:23.857 20:51:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:23.857 20:51:44 -- scripts/common.sh@335 -- # IFS=.-: 00:05:23.857 20:51:44 -- scripts/common.sh@335 -- # read -ra ver1 00:05:23.857 20:51:44 -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.857 20:51:44 -- scripts/common.sh@336 -- # read -ra ver2 00:05:23.857 20:51:44 -- scripts/common.sh@337 -- # local 'op=<' 00:05:23.857 20:51:44 -- scripts/common.sh@339 -- # ver1_l=2 00:05:23.857 20:51:44 -- scripts/common.sh@340 -- # ver2_l=1 00:05:23.857 20:51:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:23.857 20:51:44 -- scripts/common.sh@343 -- # case "$op" in 00:05:23.857 20:51:44 -- scripts/common.sh@344 -- # : 1 00:05:23.857 20:51:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:23.857 20:51:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.857 20:51:44 -- scripts/common.sh@364 -- # decimal 1 00:05:23.857 20:51:44 -- scripts/common.sh@352 -- # local d=1 00:05:23.857 20:51:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.857 20:51:44 -- scripts/common.sh@354 -- # echo 1 00:05:23.857 20:51:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:23.857 20:51:44 -- scripts/common.sh@365 -- # decimal 2 00:05:23.857 20:51:44 -- scripts/common.sh@352 -- # local d=2 00:05:23.857 20:51:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.857 20:51:44 -- scripts/common.sh@354 -- # echo 2 00:05:23.857 20:51:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:23.857 20:51:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:23.857 20:51:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:23.857 20:51:44 -- scripts/common.sh@367 -- # return 0 00:05:23.857 20:51:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.857 20:51:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:23.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.857 --rc genhtml_branch_coverage=1 00:05:23.857 --rc genhtml_function_coverage=1 00:05:23.857 --rc genhtml_legend=1 00:05:23.857 --rc geninfo_all_blocks=1 00:05:23.857 --rc geninfo_unexecuted_blocks=1 00:05:23.857 00:05:23.857 ' 00:05:23.857 20:51:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:23.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.857 --rc genhtml_branch_coverage=1 00:05:23.857 --rc genhtml_function_coverage=1 00:05:23.857 --rc genhtml_legend=1 00:05:23.857 --rc geninfo_all_blocks=1 00:05:23.857 --rc geninfo_unexecuted_blocks=1 00:05:23.857 00:05:23.857 ' 00:05:23.857 20:51:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:23.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.857 --rc genhtml_branch_coverage=1 00:05:23.857 --rc genhtml_function_coverage=1 00:05:23.857 --rc genhtml_legend=1 00:05:23.857 --rc geninfo_all_blocks=1 00:05:23.857 --rc geninfo_unexecuted_blocks=1 00:05:23.857 00:05:23.857 ' 00:05:23.857 20:51:44 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:23.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.857 --rc genhtml_branch_coverage=1 00:05:23.857 --rc genhtml_function_coverage=1 00:05:23.857 --rc genhtml_legend=1 00:05:23.857 --rc geninfo_all_blocks=1 00:05:23.857 --rc geninfo_unexecuted_blocks=1 00:05:23.857 00:05:23.857 ' 00:05:23.857 20:51:44 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:23.857 20:51:44 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:23.857 20:51:44 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:23.857 20:51:44 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:23.857 20:51:44 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:23.857 20:51:44 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:23.857 20:51:44 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:23.857 20:51:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.857 20:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:23.857 20:51:44 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57002 00:05:23.857 20:51:44 -- spdkcli/tcp.sh@27 -- # waitforlisten 57002 00:05:23.857 20:51:44 -- common/autotest_common.sh@829 -- # '[' -z 57002 ']' 00:05:23.857 20:51:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.857 20:51:44 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:23.857 20:51:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.857 20:51:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.857 20:51:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.857 20:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:23.857 [2024-12-08 20:51:44.868337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
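spdkcli_tcp exercises the RPC client over TCP by bridging the target's UNIX-domain socket to 127.0.0.1:9998 with socat, as traced below. The essential moves, with addresses and flags taken from the trace:

    # Forward TCP port 9998 to the target's UNIX RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Drive RPCs over TCP: -r retries the connection up to 100 times,
    # -t caps each attempt at 2 seconds.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"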
00:05:23.857 [2024-12-08 20:51:44.868514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57002 ] 00:05:24.115 [2024-12-08 20:51:45.035336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.373 [2024-12-08 20:51:45.184204] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.374 [2024-12-08 20:51:45.184577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.374 [2024-12-08 20:51:45.184585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.748 20:51:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.748 20:51:46 -- common/autotest_common.sh@862 -- # return 0 00:05:25.748 20:51:46 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:25.748 20:51:46 -- spdkcli/tcp.sh@31 -- # socat_pid=57026 00:05:25.748 20:51:46 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:25.748 [ 00:05:25.748 "bdev_malloc_delete", 00:05:25.748 "bdev_malloc_create", 00:05:25.748 "bdev_null_resize", 00:05:25.748 "bdev_null_delete", 00:05:25.748 "bdev_null_create", 00:05:25.748 "bdev_nvme_cuse_unregister", 00:05:25.748 "bdev_nvme_cuse_register", 00:05:25.748 "bdev_opal_new_user", 00:05:25.748 "bdev_opal_set_lock_state", 00:05:25.748 "bdev_opal_delete", 00:05:25.748 "bdev_opal_get_info", 00:05:25.748 "bdev_opal_create", 00:05:25.748 "bdev_nvme_opal_revert", 00:05:25.748 "bdev_nvme_opal_init", 00:05:25.748 "bdev_nvme_send_cmd", 00:05:25.748 "bdev_nvme_get_path_iostat", 00:05:25.748 "bdev_nvme_get_mdns_discovery_info", 00:05:25.748 "bdev_nvme_stop_mdns_discovery", 00:05:25.748 "bdev_nvme_start_mdns_discovery", 00:05:25.748 "bdev_nvme_set_multipath_policy", 00:05:25.748 "bdev_nvme_set_preferred_path", 00:05:25.748 "bdev_nvme_get_io_paths", 00:05:25.748 "bdev_nvme_remove_error_injection", 00:05:25.748 "bdev_nvme_add_error_injection", 00:05:25.748 "bdev_nvme_get_discovery_info", 00:05:25.748 "bdev_nvme_stop_discovery", 00:05:25.748 "bdev_nvme_start_discovery", 00:05:25.748 "bdev_nvme_get_controller_health_info", 00:05:25.748 "bdev_nvme_disable_controller", 00:05:25.748 "bdev_nvme_enable_controller", 00:05:25.748 "bdev_nvme_reset_controller", 00:05:25.748 "bdev_nvme_get_transport_statistics", 00:05:25.748 "bdev_nvme_apply_firmware", 00:05:25.748 "bdev_nvme_detach_controller", 00:05:25.748 "bdev_nvme_get_controllers", 00:05:25.748 "bdev_nvme_attach_controller", 00:05:25.748 "bdev_nvme_set_hotplug", 00:05:25.748 "bdev_nvme_set_options", 00:05:25.748 "bdev_passthru_delete", 00:05:25.748 "bdev_passthru_create", 00:05:25.748 "bdev_lvol_grow_lvstore", 00:05:25.748 "bdev_lvol_get_lvols", 00:05:25.748 "bdev_lvol_get_lvstores", 00:05:25.748 "bdev_lvol_delete", 00:05:25.748 "bdev_lvol_set_read_only", 00:05:25.748 "bdev_lvol_resize", 00:05:25.748 "bdev_lvol_decouple_parent", 00:05:25.748 "bdev_lvol_inflate", 00:05:25.748 "bdev_lvol_rename", 00:05:25.748 "bdev_lvol_clone_bdev", 00:05:25.748 "bdev_lvol_clone", 00:05:25.748 "bdev_lvol_snapshot", 00:05:25.748 "bdev_lvol_create", 00:05:25.748 "bdev_lvol_delete_lvstore", 00:05:25.748 "bdev_lvol_rename_lvstore", 00:05:25.748 "bdev_lvol_create_lvstore", 00:05:25.748 "bdev_raid_set_options", 00:05:25.748 "bdev_raid_remove_base_bdev", 00:05:25.748 "bdev_raid_add_base_bdev", 
00:05:25.748 "bdev_raid_delete", 00:05:25.748 "bdev_raid_create", 00:05:25.748 "bdev_raid_get_bdevs", 00:05:25.748 "bdev_error_inject_error", 00:05:25.748 "bdev_error_delete", 00:05:25.748 "bdev_error_create", 00:05:25.748 "bdev_split_delete", 00:05:25.748 "bdev_split_create", 00:05:25.748 "bdev_delay_delete", 00:05:25.748 "bdev_delay_create", 00:05:25.748 "bdev_delay_update_latency", 00:05:25.748 "bdev_zone_block_delete", 00:05:25.748 "bdev_zone_block_create", 00:05:25.748 "blobfs_create", 00:05:25.748 "blobfs_detect", 00:05:25.748 "blobfs_set_cache_size", 00:05:25.748 "bdev_xnvme_delete", 00:05:25.748 "bdev_xnvme_create", 00:05:25.748 "bdev_aio_delete", 00:05:25.748 "bdev_aio_rescan", 00:05:25.748 "bdev_aio_create", 00:05:25.748 "bdev_ftl_set_property", 00:05:25.748 "bdev_ftl_get_properties", 00:05:25.748 "bdev_ftl_get_stats", 00:05:25.748 "bdev_ftl_unmap", 00:05:25.748 "bdev_ftl_unload", 00:05:25.748 "bdev_ftl_delete", 00:05:25.748 "bdev_ftl_load", 00:05:25.748 "bdev_ftl_create", 00:05:25.748 "bdev_virtio_attach_controller", 00:05:25.748 "bdev_virtio_scsi_get_devices", 00:05:25.749 "bdev_virtio_detach_controller", 00:05:25.749 "bdev_virtio_blk_set_hotplug", 00:05:25.749 "bdev_iscsi_delete", 00:05:25.749 "bdev_iscsi_create", 00:05:25.749 "bdev_iscsi_set_options", 00:05:25.749 "accel_error_inject_error", 00:05:25.749 "ioat_scan_accel_module", 00:05:25.749 "dsa_scan_accel_module", 00:05:25.749 "iaa_scan_accel_module", 00:05:25.749 "iscsi_set_options", 00:05:25.749 "iscsi_get_auth_groups", 00:05:25.749 "iscsi_auth_group_remove_secret", 00:05:25.749 "iscsi_auth_group_add_secret", 00:05:25.749 "iscsi_delete_auth_group", 00:05:25.749 "iscsi_create_auth_group", 00:05:25.749 "iscsi_set_discovery_auth", 00:05:25.749 "iscsi_get_options", 00:05:25.749 "iscsi_target_node_request_logout", 00:05:25.749 "iscsi_target_node_set_redirect", 00:05:25.749 "iscsi_target_node_set_auth", 00:05:25.749 "iscsi_target_node_add_lun", 00:05:25.749 "iscsi_get_connections", 00:05:25.749 "iscsi_portal_group_set_auth", 00:05:25.749 "iscsi_start_portal_group", 00:05:25.749 "iscsi_delete_portal_group", 00:05:25.749 "iscsi_create_portal_group", 00:05:25.749 "iscsi_get_portal_groups", 00:05:25.749 "iscsi_delete_target_node", 00:05:25.749 "iscsi_target_node_remove_pg_ig_maps", 00:05:25.749 "iscsi_target_node_add_pg_ig_maps", 00:05:25.749 "iscsi_create_target_node", 00:05:25.749 "iscsi_get_target_nodes", 00:05:25.749 "iscsi_delete_initiator_group", 00:05:25.749 "iscsi_initiator_group_remove_initiators", 00:05:25.749 "iscsi_initiator_group_add_initiators", 00:05:25.749 "iscsi_create_initiator_group", 00:05:25.749 "iscsi_get_initiator_groups", 00:05:25.749 "nvmf_set_crdt", 00:05:25.749 "nvmf_set_config", 00:05:25.749 "nvmf_set_max_subsystems", 00:05:25.749 "nvmf_subsystem_get_listeners", 00:05:25.749 "nvmf_subsystem_get_qpairs", 00:05:25.749 "nvmf_subsystem_get_controllers", 00:05:25.749 "nvmf_get_stats", 00:05:25.749 "nvmf_get_transports", 00:05:25.749 "nvmf_create_transport", 00:05:25.749 "nvmf_get_targets", 00:05:25.749 "nvmf_delete_target", 00:05:25.749 "nvmf_create_target", 00:05:25.749 "nvmf_subsystem_allow_any_host", 00:05:25.749 "nvmf_subsystem_remove_host", 00:05:25.749 "nvmf_subsystem_add_host", 00:05:25.749 "nvmf_subsystem_remove_ns", 00:05:25.749 "nvmf_subsystem_add_ns", 00:05:25.749 "nvmf_subsystem_listener_set_ana_state", 00:05:25.749 "nvmf_discovery_get_referrals", 00:05:25.749 "nvmf_discovery_remove_referral", 00:05:25.749 "nvmf_discovery_add_referral", 00:05:25.749 "nvmf_subsystem_remove_listener", 00:05:25.749 
"nvmf_subsystem_add_listener", 00:05:25.749 "nvmf_delete_subsystem", 00:05:25.749 "nvmf_create_subsystem", 00:05:25.749 "nvmf_get_subsystems", 00:05:25.749 "env_dpdk_get_mem_stats", 00:05:25.749 "nbd_get_disks", 00:05:25.749 "nbd_stop_disk", 00:05:25.749 "nbd_start_disk", 00:05:25.749 "ublk_recover_disk", 00:05:25.749 "ublk_get_disks", 00:05:25.749 "ublk_stop_disk", 00:05:25.749 "ublk_start_disk", 00:05:25.749 "ublk_destroy_target", 00:05:25.749 "ublk_create_target", 00:05:25.749 "virtio_blk_create_transport", 00:05:25.749 "virtio_blk_get_transports", 00:05:25.749 "vhost_controller_set_coalescing", 00:05:25.749 "vhost_get_controllers", 00:05:25.749 "vhost_delete_controller", 00:05:25.749 "vhost_create_blk_controller", 00:05:25.749 "vhost_scsi_controller_remove_target", 00:05:25.749 "vhost_scsi_controller_add_target", 00:05:25.749 "vhost_start_scsi_controller", 00:05:25.749 "vhost_create_scsi_controller", 00:05:25.749 "thread_set_cpumask", 00:05:25.749 "framework_get_scheduler", 00:05:25.749 "framework_set_scheduler", 00:05:25.749 "framework_get_reactors", 00:05:25.749 "thread_get_io_channels", 00:05:25.749 "thread_get_pollers", 00:05:25.749 "thread_get_stats", 00:05:25.749 "framework_monitor_context_switch", 00:05:25.749 "spdk_kill_instance", 00:05:25.749 "log_enable_timestamps", 00:05:25.749 "log_get_flags", 00:05:25.749 "log_clear_flag", 00:05:25.749 "log_set_flag", 00:05:25.749 "log_get_level", 00:05:25.749 "log_set_level", 00:05:25.749 "log_get_print_level", 00:05:25.749 "log_set_print_level", 00:05:25.749 "framework_enable_cpumask_locks", 00:05:25.749 "framework_disable_cpumask_locks", 00:05:25.749 "framework_wait_init", 00:05:25.749 "framework_start_init", 00:05:25.749 "scsi_get_devices", 00:05:25.749 "bdev_get_histogram", 00:05:25.749 "bdev_enable_histogram", 00:05:25.749 "bdev_set_qos_limit", 00:05:25.749 "bdev_set_qd_sampling_period", 00:05:25.749 "bdev_get_bdevs", 00:05:25.749 "bdev_reset_iostat", 00:05:25.749 "bdev_get_iostat", 00:05:25.749 "bdev_examine", 00:05:25.749 "bdev_wait_for_examine", 00:05:25.749 "bdev_set_options", 00:05:25.749 "notify_get_notifications", 00:05:25.749 "notify_get_types", 00:05:25.749 "accel_get_stats", 00:05:25.749 "accel_set_options", 00:05:25.749 "accel_set_driver", 00:05:25.749 "accel_crypto_key_destroy", 00:05:25.749 "accel_crypto_keys_get", 00:05:25.749 "accel_crypto_key_create", 00:05:25.749 "accel_assign_opc", 00:05:25.749 "accel_get_module_info", 00:05:25.749 "accel_get_opc_assignments", 00:05:25.749 "vmd_rescan", 00:05:25.749 "vmd_remove_device", 00:05:25.749 "vmd_enable", 00:05:25.749 "sock_set_default_impl", 00:05:25.749 "sock_impl_set_options", 00:05:25.749 "sock_impl_get_options", 00:05:25.749 "iobuf_get_stats", 00:05:25.749 "iobuf_set_options", 00:05:25.749 "framework_get_pci_devices", 00:05:25.749 "framework_get_config", 00:05:25.749 "framework_get_subsystems", 00:05:25.749 "trace_get_info", 00:05:25.749 "trace_get_tpoint_group_mask", 00:05:25.749 "trace_disable_tpoint_group", 00:05:25.749 "trace_enable_tpoint_group", 00:05:25.749 "trace_clear_tpoint_mask", 00:05:25.749 "trace_set_tpoint_mask", 00:05:25.749 "spdk_get_version", 00:05:25.749 "rpc_get_methods" 00:05:25.749 ] 00:05:25.749 20:51:46 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:25.749 20:51:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.749 20:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:25.749 20:51:46 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:25.749 20:51:46 -- spdkcli/tcp.sh@38 -- # killprocess 57002 00:05:25.749 
20:51:46 -- common/autotest_common.sh@936 -- # '[' -z 57002 ']' 00:05:25.749 20:51:46 -- common/autotest_common.sh@940 -- # kill -0 57002 00:05:25.749 20:51:46 -- common/autotest_common.sh@941 -- # uname 00:05:25.749 20:51:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.749 20:51:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57002 00:05:25.749 20:51:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.749 20:51:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.749 killing process with pid 57002 00:05:25.749 20:51:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57002' 00:05:25.749 20:51:46 -- common/autotest_common.sh@955 -- # kill 57002 00:05:25.749 20:51:46 -- common/autotest_common.sh@960 -- # wait 57002 00:05:27.656 00:05:27.656 real 0m3.760s 00:05:27.656 user 0m6.907s 00:05:27.656 sys 0m0.488s 00:05:27.656 20:51:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.656 ************************************ 00:05:27.656 END TEST spdkcli_tcp 00:05:27.656 ************************************ 00:05:27.656 20:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:27.656 20:51:48 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.656 20:51:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.656 20:51:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.656 20:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:27.656 ************************************ 00:05:27.656 START TEST dpdk_mem_utility 00:05:27.656 ************************************ 00:05:27.656 20:51:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.656 * Looking for test storage... 00:05:27.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:27.656 20:51:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.656 20:51:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.656 20:51:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.656 20:51:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.656 20:51:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.656 20:51:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.656 20:51:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.656 20:51:48 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.656 20:51:48 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.656 20:51:48 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.656 20:51:48 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.656 20:51:48 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.656 20:51:48 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.656 20:51:48 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.656 20:51:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.656 20:51:48 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.656 20:51:48 -- scripts/common.sh@344 -- # : 1 00:05:27.656 20:51:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.656 20:51:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.656 20:51:48 -- scripts/common.sh@364 -- # decimal 1 00:05:27.656 20:51:48 -- scripts/common.sh@352 -- # local d=1 00:05:27.656 20:51:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.656 20:51:48 -- scripts/common.sh@354 -- # echo 1 00:05:27.656 20:51:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.656 20:51:48 -- scripts/common.sh@365 -- # decimal 2 00:05:27.656 20:51:48 -- scripts/common.sh@352 -- # local d=2 00:05:27.656 20:51:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.656 20:51:48 -- scripts/common.sh@354 -- # echo 2 00:05:27.656 20:51:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.656 20:51:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.656 20:51:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.656 20:51:48 -- scripts/common.sh@367 -- # return 0 00:05:27.656 20:51:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.656 20:51:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.656 --rc genhtml_branch_coverage=1 00:05:27.656 --rc genhtml_function_coverage=1 00:05:27.656 --rc genhtml_legend=1 00:05:27.656 --rc geninfo_all_blocks=1 00:05:27.656 --rc geninfo_unexecuted_blocks=1 00:05:27.656 00:05:27.656 ' 00:05:27.656 20:51:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.656 --rc genhtml_branch_coverage=1 00:05:27.656 --rc genhtml_function_coverage=1 00:05:27.656 --rc genhtml_legend=1 00:05:27.656 --rc geninfo_all_blocks=1 00:05:27.656 --rc geninfo_unexecuted_blocks=1 00:05:27.656 00:05:27.656 ' 00:05:27.656 20:51:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.656 --rc genhtml_branch_coverage=1 00:05:27.656 --rc genhtml_function_coverage=1 00:05:27.656 --rc genhtml_legend=1 00:05:27.656 --rc geninfo_all_blocks=1 00:05:27.656 --rc geninfo_unexecuted_blocks=1 00:05:27.656 00:05:27.656 ' 00:05:27.656 20:51:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.656 --rc genhtml_branch_coverage=1 00:05:27.656 --rc genhtml_function_coverage=1 00:05:27.656 --rc genhtml_legend=1 00:05:27.656 --rc geninfo_all_blocks=1 00:05:27.656 --rc geninfo_unexecuted_blocks=1 00:05:27.656 00:05:27.656 ' 00:05:27.656 20:51:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:27.656 20:51:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57119 00:05:27.656 20:51:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57119 00:05:27.656 20:51:48 -- common/autotest_common.sh@829 -- # '[' -z 57119 ']' 00:05:27.656 20:51:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.656 20:51:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.656 20:51:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.657 20:51:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
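dpdk_mem_utility is a two-step flow, traced below: an env_dpdk_get_mem_stats RPC makes the running target dump its DPDK memory state, and scripts/dpdk_mem_info.py then renders the dump. A sketch of the same flow run by hand (the -m 0 semantics are inferred from the detailed heap listing it produces in this run):

    # Ask the target to dump DPDK memory stats; the RPC replies with the
    # dump location, {"filename": "/tmp/spdk_mem_dump.txt"}.
    scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize the dump: heaps, mempools, and memzones.
    scripts/dpdk_mem_info.py

    # Second invocation as traced: per-element detail for heap 0.
    scripts/dpdk_mem_info.py -m 0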
00:05:27.657 20:51:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.657 20:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:27.657 [2024-12-08 20:51:48.675397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:27.657 [2024-12-08 20:51:48.675577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57119 ] 00:05:27.966 [2024-12-08 20:51:48.846912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.245 [2024-12-08 20:51:48.992094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.245 [2024-12-08 20:51:48.992357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.827 20:51:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.827 20:51:49 -- common/autotest_common.sh@862 -- # return 0 00:05:28.827 20:51:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.827 20:51:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.827 20:51:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.827 20:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:28.827 { 00:05:28.827 "filename": "/tmp/spdk_mem_dump.txt" 00:05:28.827 } 00:05:28.827 20:51:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.827 20:51:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.827 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:28.827 1 heaps totaling size 820.000000 MiB 00:05:28.827 size: 820.000000 MiB heap id: 0 00:05:28.827 end heaps---------- 00:05:28.827 8 mempools totaling size 598.116089 MiB 00:05:28.827 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:28.827 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:28.827 size: 84.521057 MiB name: bdev_io_57119 00:05:28.827 size: 51.011292 MiB name: evtpool_57119 00:05:28.827 size: 50.003479 MiB name: msgpool_57119 00:05:28.827 size: 21.763794 MiB name: PDU_Pool 00:05:28.827 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:28.827 size: 0.026123 MiB name: Session_Pool 00:05:28.827 end mempools------- 00:05:28.827 6 memzones totaling size 4.142822 MiB 00:05:28.827 size: 1.000366 MiB name: RG_ring_0_57119 00:05:28.827 size: 1.000366 MiB name: RG_ring_1_57119 00:05:28.827 size: 1.000366 MiB name: RG_ring_4_57119 00:05:28.827 size: 1.000366 MiB name: RG_ring_5_57119 00:05:28.827 size: 0.125366 MiB name: RG_ring_2_57119 00:05:28.827 size: 0.015991 MiB name: RG_ring_3_57119 00:05:28.827 end memzones------- 00:05:28.827 20:51:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.827 heap id: 0 total size: 820.000000 MiB number of busy elements: 307 number of free elements: 18 00:05:28.827 list of free elements. 
size: 18.449829 MiB 00:05:28.827 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:28.827 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:28.827 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:28.827 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:28.827 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:28.827 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:28.827 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:28.827 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:28.827 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:28.827 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:28.827 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:28.827 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:28.827 element at address: 0x20001b000000 with size: 0.562927 MiB 00:05:28.827 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:28.827 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:28.827 element at address: 0x200013800000 with size: 0.468140 MiB 00:05:28.827 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:28.827 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:28.827 list of standard malloc elements. size: 199.285767 MiB 00:05:28.827 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:28.827 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:28.827 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:28.827 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:28.827 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:28.827 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:28.827 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:28.827 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:28.827 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:28.827 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:28.827 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:28.827 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:28.827 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:28.827 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:28.828 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d2c0 
with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:28.828 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0925c0 with size: 0.000244 MiB 
00:05:28.829 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:28.829 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:28.829 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:28.829 element at 
address: 0x20002846af80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e080 
with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:28.829 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:28.829 list of memzone associated elements. 
size: 602.264404 MiB 00:05:28.829 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:28.829 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:28.830 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:28.830 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:28.830 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:28.830 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_57119_0 00:05:28.830 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:28.830 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57119_0 00:05:28.830 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:28.830 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57119_0 00:05:28.830 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:28.830 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:28.830 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:28.830 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:28.830 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:28.830 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57119 00:05:28.830 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:28.830 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57119 00:05:28.830 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:28.830 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57119 00:05:28.830 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:28.830 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:28.830 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:28.830 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:28.830 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:28.830 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:28.830 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:28.830 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:28.830 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:28.830 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57119 00:05:28.830 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:28.830 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57119 00:05:28.830 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:28.830 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57119 00:05:28.830 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:28.830 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57119 00:05:28.830 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:28.830 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57119 00:05:28.830 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:28.830 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:28.830 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:28.830 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:28.830 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:28.830 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:28.830 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:28.830 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_57119 00:05:28.830 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:28.830 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:28.830 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:28.830 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:28.830 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:28.830 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57119 00:05:28.830 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:28.830 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:28.830 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:28.830 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57119 00:05:28.830 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:28.830 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57119 00:05:28.830 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:28.830 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:28.830 20:51:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:28.830 20:51:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57119 00:05:28.830 20:51:49 -- common/autotest_common.sh@936 -- # '[' -z 57119 ']' 00:05:28.830 20:51:49 -- common/autotest_common.sh@940 -- # kill -0 57119 00:05:28.830 20:51:49 -- common/autotest_common.sh@941 -- # uname 00:05:28.830 20:51:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.830 20:51:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57119 00:05:28.830 20:51:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.830 killing process with pid 57119 00:05:28.830 20:51:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.830 20:51:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57119' 00:05:28.830 20:51:49 -- common/autotest_common.sh@955 -- # kill 57119 00:05:28.830 20:51:49 -- common/autotest_common.sh@960 -- # wait 57119 00:05:30.736 00:05:30.736 real 0m3.001s 00:05:30.736 user 0m3.170s 00:05:30.736 sys 0m0.426s 00:05:30.736 ************************************ 00:05:30.736 END TEST dpdk_mem_utility 00:05:30.736 ************************************ 00:05:30.736 20:51:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.736 20:51:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.736 20:51:51 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:30.736 20:51:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.736 20:51:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.736 20:51:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.736 ************************************ 00:05:30.736 START TEST event 00:05:30.736 ************************************ 00:05:30.736 20:51:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:30.736 * Looking for test storage... 
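The event suite that starts here (test/event/event.sh) chains its sub-tests through run_test; the sequence below is reconstructed from the START TEST banners later in this log, and the exact conditional structure around the Linux/nbd guards is an assumption:

  # Reconstructed shape of event.sh's main sequence (not the verbatim script).
  run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
  if [ "$(uname -s)" = Linux ]; then
      run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
      if modprobe -n nbd; then                  # dry-run check that the nbd module exists
          run_test app_repeat app_repeat_test   # app_repeat_test is a function inside event.sh
      fi
  fi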
00:05:30.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:30.736 20:51:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:30.736 20:51:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.736 20:51:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:30.736 20:51:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.736 20:51:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.736 20:51:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.736 20:51:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.736 20:51:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.736 20:51:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.736 20:51:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.736 20:51:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.736 20:51:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.736 20:51:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.736 20:51:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.736 20:51:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.736 20:51:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.736 20:51:51 -- scripts/common.sh@344 -- # : 1 00:05:30.736 20:51:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.736 20:51:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.736 20:51:51 -- scripts/common.sh@364 -- # decimal 1 00:05:30.736 20:51:51 -- scripts/common.sh@352 -- # local d=1 00:05:30.736 20:51:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.736 20:51:51 -- scripts/common.sh@354 -- # echo 1 00:05:30.736 20:51:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.736 20:51:51 -- scripts/common.sh@365 -- # decimal 2 00:05:30.736 20:51:51 -- scripts/common.sh@352 -- # local d=2 00:05:30.736 20:51:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.736 20:51:51 -- scripts/common.sh@354 -- # echo 2 00:05:30.736 20:51:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.736 20:51:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.736 20:51:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.736 20:51:51 -- scripts/common.sh@367 -- # return 0 00:05:30.736 20:51:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.736 20:51:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.736 --rc genhtml_branch_coverage=1 00:05:30.736 --rc genhtml_function_coverage=1 00:05:30.736 --rc genhtml_legend=1 00:05:30.736 --rc geninfo_all_blocks=1 00:05:30.736 --rc geninfo_unexecuted_blocks=1 00:05:30.736 00:05:30.736 ' 00:05:30.736 20:51:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.736 --rc genhtml_branch_coverage=1 00:05:30.736 --rc genhtml_function_coverage=1 00:05:30.736 --rc genhtml_legend=1 00:05:30.736 --rc geninfo_all_blocks=1 00:05:30.736 --rc geninfo_unexecuted_blocks=1 00:05:30.736 00:05:30.736 ' 00:05:30.736 20:51:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.736 --rc genhtml_branch_coverage=1 00:05:30.736 --rc genhtml_function_coverage=1 00:05:30.736 --rc genhtml_legend=1 00:05:30.736 --rc geninfo_all_blocks=1 00:05:30.736 --rc geninfo_unexecuted_blocks=1 00:05:30.736 00:05:30.736 ' 00:05:30.736 20:51:51 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.736 --rc genhtml_branch_coverage=1 00:05:30.736 --rc genhtml_function_coverage=1 00:05:30.736 --rc genhtml_legend=1 00:05:30.736 --rc geninfo_all_blocks=1 00:05:30.736 --rc geninfo_unexecuted_blocks=1 00:05:30.736 00:05:30.736 ' 00:05:30.736 20:51:51 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:30.736 20:51:51 -- bdev/nbd_common.sh@6 -- # set -e 00:05:30.736 20:51:51 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.736 20:51:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:30.736 20:51:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.736 20:51:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.736 ************************************ 00:05:30.736 START TEST event_perf 00:05:30.736 ************************************ 00:05:30.736 20:51:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.736 Running I/O for 1 seconds...[2024-12-08 20:51:51.665970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:30.736 [2024-12-08 20:51:51.666137] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57215 ] 00:05:30.995 [2024-12-08 20:51:51.836015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.995 [2024-12-08 20:51:51.984026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.995 [2024-12-08 20:51:51.984227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.995 [2024-12-08 20:51:51.984286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.995 [2024-12-08 20:51:51.984303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.375 Running I/O for 1 seconds... 00:05:32.375 lcore 0: 202612 00:05:32.375 lcore 1: 202610 00:05:32.375 lcore 2: 202612 00:05:32.375 lcore 3: 202610 00:05:32.375 done. 00:05:32.375 00:05:32.375 real 0m1.630s 00:05:32.375 user 0m4.406s 00:05:32.375 sys 0m0.104s 00:05:32.375 20:51:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.375 20:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.375 ************************************ 00:05:32.375 END TEST event_perf 00:05:32.375 ************************************ 00:05:32.375 20:51:53 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:32.375 20:51:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:32.375 20:51:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.375 20:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.375 ************************************ 00:05:32.375 START TEST event_reactor 00:05:32.375 ************************************ 00:05:32.375 20:51:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:32.375 [2024-12-08 20:51:53.348778] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
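event_perf above was launched with -m 0xF over a 1-second window (-t 1), which is why four reactors come up and four per-lcore event counts are printed: the -m/-c options take a hex core mask in which bit N enables lcore N. The masks used across this suite decode as follows:

  # Core-mask arithmetic for the -m/-c options (bit N => lcore N).
  printf '0x%X\n' $(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) ))   # 0xF: lcores 0-3 (event_perf)
  printf '0x%X\n' $((  1 << 0 ))                                     # 0x1: lcore 0 only (reactor tests)
  printf '0x%X\n' $(( (1 << 0) | (1 << 1) ))                         # 0x3: lcores 0-1 (app_repeat, later)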
00:05:32.375 [2024-12-08 20:51:53.348932] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57255 ] 00:05:32.634 [2024-12-08 20:51:53.516662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.634 [2024-12-08 20:51:53.660283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.012 test_start 00:05:34.012 oneshot 00:05:34.012 tick 100 00:05:34.012 tick 100 00:05:34.012 tick 250 00:05:34.012 tick 100 00:05:34.012 tick 100 00:05:34.012 tick 250 00:05:34.012 tick 500 00:05:34.012 tick 100 00:05:34.012 tick 100 00:05:34.012 tick 100 00:05:34.012 tick 250 00:05:34.012 tick 100 00:05:34.012 tick 100 00:05:34.012 test_end 00:05:34.012 00:05:34.012 real 0m1.639s 00:05:34.012 user 0m1.415s 00:05:34.012 sys 0m0.115s 00:05:34.012 20:51:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.012 20:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.012 ************************************ 00:05:34.012 END TEST event_reactor 00:05:34.012 ************************************ 00:05:34.012 20:51:54 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.012 20:51:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:34.012 20:51:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.012 20:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.012 ************************************ 00:05:34.012 START TEST event_reactor_perf 00:05:34.012 ************************************ 00:05:34.012 20:51:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.012 [2024-12-08 20:51:55.042328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
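Every sub-test above and below is wrapped by run_test, which prints the starred START TEST/END TEST banners and the real/user/sys lines between them. Behaviorally the wrapper amounts to the sketch below (an approximation of the autotest_common.sh helper, not its verbatim source):

  run_test() {
      local test_name=$1
      shift
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      time "$@"     # emits the real/user/sys timing seen after each sub-test
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
  }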
00:05:34.012 [2024-12-08 20:51:55.042484] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57297 ] 00:05:34.271 [2024-12-08 20:51:55.210681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.529 [2024-12-08 20:51:55.355720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.912 test_start 00:05:35.912 test_end 00:05:35.912 Performance: 351483 events per second 00:05:35.912 00:05:35.912 real 0m1.636s 00:05:35.912 user 0m1.433s 00:05:35.912 sys 0m0.094s 00:05:35.912 20:51:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.912 20:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:35.912 ************************************ 00:05:35.912 END TEST event_reactor_perf 00:05:35.912 ************************************ 00:05:35.912 20:51:56 -- event/event.sh@49 -- # uname -s 00:05:35.912 20:51:56 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:35.912 20:51:56 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:35.912 20:51:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.912 20:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.912 20:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:35.912 ************************************ 00:05:35.912 START TEST event_scheduler 00:05:35.912 ************************************ 00:05:35.912 20:51:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:35.912 * Looking for test storage... 00:05:35.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:35.912 20:51:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:35.912 20:51:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:35.912 20:51:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:35.912 20:51:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:35.912 20:51:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:35.912 20:51:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:35.912 20:51:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:35.912 20:51:56 -- scripts/common.sh@335 -- # IFS=.-: 00:05:35.912 20:51:56 -- scripts/common.sh@335 -- # read -ra ver1 00:05:35.912 20:51:56 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.912 20:51:56 -- scripts/common.sh@336 -- # read -ra ver2 00:05:35.912 20:51:56 -- scripts/common.sh@337 -- # local 'op=<' 00:05:35.912 20:51:56 -- scripts/common.sh@339 -- # ver1_l=2 00:05:35.912 20:51:56 -- scripts/common.sh@340 -- # ver2_l=1 00:05:35.912 20:51:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:35.912 20:51:56 -- scripts/common.sh@343 -- # case "$op" in 00:05:35.912 20:51:56 -- scripts/common.sh@344 -- # : 1 00:05:35.912 20:51:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:35.912 20:51:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.912 20:51:56 -- scripts/common.sh@364 -- # decimal 1 00:05:35.912 20:51:56 -- scripts/common.sh@352 -- # local d=1 00:05:35.912 20:51:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.912 20:51:56 -- scripts/common.sh@354 -- # echo 1 00:05:35.912 20:51:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:35.912 20:51:56 -- scripts/common.sh@365 -- # decimal 2 00:05:35.912 20:51:56 -- scripts/common.sh@352 -- # local d=2 00:05:35.912 20:51:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.912 20:51:56 -- scripts/common.sh@354 -- # echo 2 00:05:35.912 20:51:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:35.912 20:51:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:35.912 20:51:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:35.912 20:51:56 -- scripts/common.sh@367 -- # return 0 00:05:35.912 20:51:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.912 20:51:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:35.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.912 --rc genhtml_branch_coverage=1 00:05:35.912 --rc genhtml_function_coverage=1 00:05:35.912 --rc genhtml_legend=1 00:05:35.912 --rc geninfo_all_blocks=1 00:05:35.912 --rc geninfo_unexecuted_blocks=1 00:05:35.912 00:05:35.912 ' 00:05:35.912 20:51:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:35.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.912 --rc genhtml_branch_coverage=1 00:05:35.912 --rc genhtml_function_coverage=1 00:05:35.912 --rc genhtml_legend=1 00:05:35.912 --rc geninfo_all_blocks=1 00:05:35.912 --rc geninfo_unexecuted_blocks=1 00:05:35.912 00:05:35.912 ' 00:05:35.912 20:51:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:35.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.912 --rc genhtml_branch_coverage=1 00:05:35.912 --rc genhtml_function_coverage=1 00:05:35.912 --rc genhtml_legend=1 00:05:35.913 --rc geninfo_all_blocks=1 00:05:35.913 --rc geninfo_unexecuted_blocks=1 00:05:35.913 00:05:35.913 ' 00:05:35.913 20:51:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:35.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.913 --rc genhtml_branch_coverage=1 00:05:35.913 --rc genhtml_function_coverage=1 00:05:35.913 --rc genhtml_legend=1 00:05:35.913 --rc geninfo_all_blocks=1 00:05:35.913 --rc geninfo_unexecuted_blocks=1 00:05:35.913 00:05:35.913 ' 00:05:35.913 20:51:56 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:35.913 20:51:56 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57371 00:05:35.913 20:51:56 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.913 20:51:56 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:35.913 20:51:56 -- scheduler/scheduler.sh@37 -- # waitforlisten 57371 00:05:35.913 20:51:56 -- common/autotest_common.sh@829 -- # '[' -z 57371 ']' 00:05:35.913 20:51:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.913 20:51:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.913 20:51:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
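The scripts/common.sh xtrace repeated before each suite (again just above, ahead of the scheduler test) is the coverage setup comparing the installed lcov version (1.15 here) against 2 to pick LCOV_OPTS. It splits both versions on IFS=.-: and compares them component by component. A rough reconstruction follows; the split, the decimal guard, and the per-component compare match the trace, while the handling of non-numeric parts and unequal lengths is assumed:

  decimal() {   # numeric components pass through; anything else counts as 0 (assumption)
      local d=$1
      if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi
  }

  lt() {   # "is version $1 older than $2?", e.g. lt 1.15 2 succeeds
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          ver1[v]=$(decimal "${ver1[v]:-0}")
          ver2[v]=$(decimal "${ver2[v]:-0}")
          if (( ver1[v] > ver2[v] )); then return 1; fi
          if (( ver1[v] < ver2[v] )); then return 0; fi
      done
      return 1   # versions are equal, so not less-than
  }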
00:05:35.913 20:51:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.913 20:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:36.173 [2024-12-08 20:51:56.967665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.173 [2024-12-08 20:51:56.967857] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57371 ] 00:05:36.173 [2024-12-08 20:51:57.140765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.432 [2024-12-08 20:51:57.367833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.432 [2024-12-08 20:51:57.367944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.432 [2024-12-08 20:51:57.368101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.432 [2024-12-08 20:51:57.368133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.000 20:51:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.000 20:51:57 -- common/autotest_common.sh@862 -- # return 0 00:05:37.000 20:51:57 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:37.000 20:51:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.000 20:51:57 -- common/autotest_common.sh@10 -- # set +x 00:05:37.000 POWER: Env isn't set yet! 00:05:37.000 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:37.000 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.000 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.000 POWER: Attempting to initialise PSTAT power management... 00:05:37.000 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.000 POWER: Cannot set governor of lcore 0 to performance 00:05:37.000 POWER: Attempting to initialise AMD PSTATE power management... 00:05:37.000 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.000 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.000 POWER: Attempting to initialise CPPC power management... 00:05:37.000 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.000 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.000 POWER: Attempting to initialise VM power management... 
00:05:37.000 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:37.000 POWER: Unable to set Power Management Environment for lcore 0 00:05:37.000 [2024-12-08 20:51:57.925836] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:37.000 [2024-12-08 20:51:57.925856] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:37.000 [2024-12-08 20:51:57.925870] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:37.000 [2024-12-08 20:51:57.925890] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:37.000 [2024-12-08 20:51:57.925905] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:37.000 [2024-12-08 20:51:57.925916] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:37.000 20:51:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.000 20:51:57 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:37.000 20:51:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.000 20:51:57 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 [2024-12-08 20:51:58.142285] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:37.259 20:51:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.259 20:51:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 ************************************ 00:05:37.259 START TEST scheduler_create_thread 00:05:37.259 ************************************ 00:05:37.259 20:51:58 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 2 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 3 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 4 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 5 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 6 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 7 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 8 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 9 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 10 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.259 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 20:51:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.259 20:51:58 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:37.259 20:51:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.260 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:39.163 20:51:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.163 20:51:59 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.163 20:51:59 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.163 20:51:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.163 20:51:59 -- common/autotest_common.sh@10 -- # set +x 00:05:40.098 20:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.098 00:05:40.098 real 0m2.620s 00:05:40.098 user 0m0.023s 00:05:40.098 sys 0m0.002s 00:05:40.098 ************************************ 00:05:40.098 END TEST scheduler_create_thread 00:05:40.098 ************************************ 00:05:40.098 20:52:00 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.098 20:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:40.098 20:52:00 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:40.098 20:52:00 -- scheduler/scheduler.sh@46 -- # killprocess 57371 00:05:40.098 20:52:00 -- common/autotest_common.sh@936 -- # '[' -z 57371 ']' 00:05:40.098 20:52:00 -- common/autotest_common.sh@940 -- # kill -0 57371 00:05:40.098 20:52:00 -- common/autotest_common.sh@941 -- # uname 00:05:40.098 20:52:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.098 20:52:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57371 00:05:40.098 20:52:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:40.098 killing process with pid 57371 00:05:40.098 20:52:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:40.098 20:52:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57371' 00:05:40.098 20:52:00 -- common/autotest_common.sh@955 -- # kill 57371 00:05:40.098 20:52:00 -- common/autotest_common.sh@960 -- # wait 57371 00:05:40.357 [2024-12-08 20:52:01.255061] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:41.293 00:05:41.293 real 0m5.447s 00:05:41.293 user 0m9.397s 00:05:41.293 sys 0m0.410s 00:05:41.293 ************************************ 00:05:41.293 END TEST event_scheduler 00:05:41.293 ************************************ 00:05:41.293 20:52:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.293 20:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.293 20:52:02 -- event/event.sh@51 -- # modprobe -n nbd 00:05:41.293 20:52:02 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:41.293 20:52:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.293 20:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.293 20:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.293 ************************************ 00:05:41.293 START TEST app_repeat 00:05:41.293 ************************************ 00:05:41.293 20:52:02 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:41.293 20:52:02 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.293 20:52:02 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.293 20:52:02 -- event/event.sh@13 -- # local nbd_list 00:05:41.293 20:52:02 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.293 20:52:02 -- event/event.sh@14 -- # local bdev_list 00:05:41.293 20:52:02 -- event/event.sh@15 -- # local repeat_times=4 00:05:41.293 20:52:02 -- event/event.sh@17 -- # modprobe nbd 00:05:41.293 20:52:02 -- event/event.sh@19 -- # repeat_pid=57478 00:05:41.293 Process app_repeat pid: 57478 00:05:41.293 20:52:02 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.293 20:52:02 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57478' 00:05:41.293 spdk_app_start Round 0 00:05:41.293 20:52:02 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:41.293 20:52:02 -- event/event.sh@23 -- # for i in {0..2} 00:05:41.293 20:52:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:41.293 20:52:02 -- event/event.sh@25 -- # waitforlisten 57478 /var/tmp/spdk-nbd.sock 00:05:41.293 20:52:02 -- common/autotest_common.sh@829 -- # '[' -z 57478 ']' 00:05:41.293 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock... 00:05:41.293 20:52:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.293 20:52:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.293 20:52:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.293 20:52:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.293 20:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.293 [2024-12-08 20:52:02.251046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.293 [2024-12-08 20:52:02.251229] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57478 ] 00:05:41.552 [2024-12-08 20:52:02.413263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.552 [2024-12-08 20:52:02.558749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.552 [2024-12-08 20:52:02.558761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.489 20:52:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.489 20:52:03 -- common/autotest_common.sh@862 -- # return 0 00:05:42.489 20:52:03 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.748 Malloc0 00:05:42.748 20:52:03 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.007 Malloc1 00:05:43.007 20:52:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@12 -- # local i 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.007 20:52:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.266 /dev/nbd0 00:05:43.266 20:52:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.266 20:52:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.266 20:52:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:43.266 20:52:04 -- common/autotest_common.sh@867 -- # local i 00:05:43.266 20:52:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:43.266 20:52:04 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:43.266 20:52:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:43.266 20:52:04 -- common/autotest_common.sh@871 -- # break 00:05:43.266 20:52:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:43.266 20:52:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:43.266 20:52:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.266 1+0 records in 00:05:43.266 1+0 records out 00:05:43.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030482 s, 13.4 MB/s 00:05:43.266 20:52:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.266 20:52:04 -- common/autotest_common.sh@884 -- # size=4096 00:05:43.266 20:52:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.266 20:52:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:43.266 20:52:04 -- common/autotest_common.sh@887 -- # return 0 00:05:43.266 20:52:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.266 20:52:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.266 20:52:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.526 /dev/nbd1 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.526 20:52:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:43.526 20:52:04 -- common/autotest_common.sh@867 -- # local i 00:05:43.526 20:52:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:43.526 20:52:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:43.526 20:52:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:43.526 20:52:04 -- common/autotest_common.sh@871 -- # break 00:05:43.526 20:52:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:43.526 20:52:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:43.526 20:52:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.526 1+0 records in 00:05:43.526 1+0 records out 00:05:43.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310465 s, 13.2 MB/s 00:05:43.526 20:52:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.526 20:52:04 -- common/autotest_common.sh@884 -- # size=4096 00:05:43.526 20:52:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.526 20:52:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:43.526 20:52:04 -- common/autotest_common.sh@887 -- # return 0 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.526 { 00:05:43.526 "nbd_device": "/dev/nbd0", 00:05:43.526 "bdev_name": "Malloc0" 00:05:43.526 }, 00:05:43.526 { 00:05:43.526 "nbd_device": "/dev/nbd1", 
00:05:43.526 "bdev_name": "Malloc1" 00:05:43.526 } 00:05:43.526 ]' 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.526 20:52:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.526 { 00:05:43.526 "nbd_device": "/dev/nbd0", 00:05:43.526 "bdev_name": "Malloc0" 00:05:43.526 }, 00:05:43.526 { 00:05:43.526 "nbd_device": "/dev/nbd1", 00:05:43.526 "bdev_name": "Malloc1" 00:05:43.526 } 00:05:43.526 ]' 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.786 /dev/nbd1' 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.786 /dev/nbd1' 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.786 256+0 records in 00:05:43.786 256+0 records out 00:05:43.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757554 s, 138 MB/s 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.786 256+0 records in 00:05:43.786 256+0 records out 00:05:43.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249845 s, 42.0 MB/s 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.786 256+0 records in 00:05:43.786 256+0 records out 00:05:43.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0338614 s, 31.0 MB/s 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@51 -- # local i 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.786 20:52:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@41 -- # break 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.045 20:52:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@41 -- # break 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.303 20:52:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@65 -- # true 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.560 20:52:05 -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.560 20:52:05 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.128 20:52:05 -- event/event.sh@35 -- # sleep 3 00:05:46.065 [2024-12-08 20:52:06.791162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.065 [2024-12-08 20:52:06.925522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.065 [2024-12-08 
20:52:06.925525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.065 [2024-12-08 20:52:07.064337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.065 [2024-12-08 20:52:07.064389] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.969 spdk_app_start Round 1 00:05:47.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.969 20:52:08 -- event/event.sh@23 -- # for i in {0..2} 00:05:47.969 20:52:08 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.969 20:52:08 -- event/event.sh@25 -- # waitforlisten 57478 /var/tmp/spdk-nbd.sock 00:05:47.969 20:52:08 -- common/autotest_common.sh@829 -- # '[' -z 57478 ']' 00:05:47.969 20:52:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.969 20:52:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.969 20:52:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.969 20:52:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.970 20:52:08 -- common/autotest_common.sh@10 -- # set +x 00:05:48.229 20:52:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.229 20:52:09 -- common/autotest_common.sh@862 -- # return 0 00:05:48.229 20:52:09 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.488 Malloc0 00:05:48.488 20:52:09 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.747 Malloc1 00:05:48.748 20:52:09 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@12 -- # local i 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.748 20:52:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.007 /dev/nbd0 00:05:49.007 20:52:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.007 20:52:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.007 20:52:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:49.007 20:52:09 -- common/autotest_common.sh@867 -- # local i 00:05:49.007 20:52:09 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:05:49.007 20:52:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.007 20:52:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:49.007 20:52:09 -- common/autotest_common.sh@871 -- # break 00:05:49.007 20:52:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.007 20:52:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.007 20:52:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.007 1+0 records in 00:05:49.007 1+0 records out 00:05:49.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021055 s, 19.5 MB/s 00:05:49.007 20:52:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.007 20:52:09 -- common/autotest_common.sh@884 -- # size=4096 00:05:49.007 20:52:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.007 20:52:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:49.007 20:52:09 -- common/autotest_common.sh@887 -- # return 0 00:05:49.007 20:52:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.007 20:52:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.007 20:52:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.266 /dev/nbd1 00:05:49.266 20:52:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.266 20:52:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.266 20:52:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:49.266 20:52:10 -- common/autotest_common.sh@867 -- # local i 00:05:49.266 20:52:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:49.266 20:52:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.266 20:52:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:49.266 20:52:10 -- common/autotest_common.sh@871 -- # break 00:05:49.266 20:52:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.266 20:52:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.266 20:52:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.266 1+0 records in 00:05:49.266 1+0 records out 00:05:49.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571521 s, 7.2 MB/s 00:05:49.266 20:52:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.266 20:52:10 -- common/autotest_common.sh@884 -- # size=4096 00:05:49.266 20:52:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.266 20:52:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:49.266 20:52:10 -- common/autotest_common.sh@887 -- # return 0 00:05:49.266 20:52:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.266 20:52:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.266 20:52:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.266 20:52:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.266 20:52:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.835 { 00:05:49.835 "nbd_device": "/dev/nbd0", 00:05:49.835 "bdev_name": "Malloc0" 00:05:49.835 }, 00:05:49.835 { 00:05:49.835 
"nbd_device": "/dev/nbd1", 00:05:49.835 "bdev_name": "Malloc1" 00:05:49.835 } 00:05:49.835 ]' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.835 { 00:05:49.835 "nbd_device": "/dev/nbd0", 00:05:49.835 "bdev_name": "Malloc0" 00:05:49.835 }, 00:05:49.835 { 00:05:49.835 "nbd_device": "/dev/nbd1", 00:05:49.835 "bdev_name": "Malloc1" 00:05:49.835 } 00:05:49.835 ]' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.835 /dev/nbd1' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.835 /dev/nbd1' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.835 256+0 records in 00:05:49.835 256+0 records out 00:05:49.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00838453 s, 125 MB/s 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.835 256+0 records in 00:05:49.835 256+0 records out 00:05:49.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252406 s, 41.5 MB/s 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.835 256+0 records in 00:05:49.835 256+0 records out 00:05:49.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296443 s, 35.4 MB/s 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.835 20:52:10 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@51 -- # local i 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.835 20:52:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@41 -- # break 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.094 20:52:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@41 -- # break 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.354 20:52:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@65 -- # true 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.613 20:52:11 -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.613 20:52:11 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.872 20:52:11 -- event/event.sh@35 -- # sleep 3 00:05:51.815 [2024-12-08 20:52:12.785652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.074 [2024-12-08 20:52:12.921481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
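The xtrace above has just completed another full app_repeat round. Stripped of the trace bookkeeping, each round creates two 64 MB malloc bdevs with 4096-byte blocks, exports them as /dev/nbd0 and /dev/nbd1, waits for the kernel to surface each device, pushes 1 MiB of random data through both with O_DIRECT, compares it back, and tears everything down. The sketch below replays that flow in plain shell; every path, size, and flag is lifted from the trace itself, and the only assumption is the sleep between retries, which the trace does not show:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  rand=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  probe=/home/vagrant/spdk_repo/spdk/test/event/nbdtest

  $rpc -s $sock bdev_malloc_create 64 4096              # -> Malloc0
  $rpc -s $sock bdev_malloc_create 64 4096              # -> Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

  for nbd in nbd0 nbd1; do                              # waitfornbd pattern from the trace
      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$nbd" /proc/partitions && break
          sleep 0.1                                     # assumed back-off; the trace shows only the loop bounds
      done
      dd if=/dev/$nbd of=$probe bs=4096 count=1 iflag=direct
      [ "$(stat -c %s "$probe")" != 0 ]                 # a non-empty direct read means the device is live
      rm -f "$probe"
  done

  dd if=/dev/urandom of=$rand bs=4096 count=256         # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$rand of=$nbd bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$rand" "$nbd"                       # verify the write survived the round trip
  done
  rm "$rand"

  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1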
00:05:52.074 [2024-12-08 20:52:12.921487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.074 [2024-12-08 20:52:13.051895] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.074 [2024-12-08 20:52:13.051991] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.978 spdk_app_start Round 2 00:05:53.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.978 20:52:14 -- event/event.sh@23 -- # for i in {0..2} 00:05:53.978 20:52:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:53.978 20:52:14 -- event/event.sh@25 -- # waitforlisten 57478 /var/tmp/spdk-nbd.sock 00:05:53.978 20:52:14 -- common/autotest_common.sh@829 -- # '[' -z 57478 ']' 00:05:53.978 20:52:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.978 20:52:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.978 20:52:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.978 20:52:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.978 20:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:54.236 20:52:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.236 20:52:15 -- common/autotest_common.sh@862 -- # return 0 00:05:54.237 20:52:15 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.496 Malloc0 00:05:54.496 20:52:15 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.754 Malloc1 00:05:54.754 20:52:15 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@12 -- # local i 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.754 20:52:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.013 /dev/nbd0 00:05:55.013 20:52:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.013 20:52:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.013 20:52:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:55.013 20:52:15 -- common/autotest_common.sh@867 -- # local i 00:05:55.013 20:52:15 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.013 20:52:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.013 20:52:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:55.013 20:52:15 -- common/autotest_common.sh@871 -- # break 00:05:55.013 20:52:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.013 20:52:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.013 20:52:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.013 1+0 records in 00:05:55.013 1+0 records out 00:05:55.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038512 s, 10.6 MB/s 00:05:55.013 20:52:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.013 20:52:16 -- common/autotest_common.sh@884 -- # size=4096 00:05:55.013 20:52:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.013 20:52:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.013 20:52:16 -- common/autotest_common.sh@887 -- # return 0 00:05:55.013 20:52:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.013 20:52:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.013 20:52:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.271 /dev/nbd1 00:05:55.271 20:52:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.271 20:52:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.271 20:52:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.271 20:52:16 -- common/autotest_common.sh@867 -- # local i 00:05:55.271 20:52:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.271 20:52:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.271 20:52:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.271 20:52:16 -- common/autotest_common.sh@871 -- # break 00:05:55.271 20:52:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.271 20:52:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.271 20:52:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.271 1+0 records in 00:05:55.271 1+0 records out 00:05:55.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356857 s, 11.5 MB/s 00:05:55.271 20:52:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.271 20:52:16 -- common/autotest_common.sh@884 -- # size=4096 00:05:55.271 20:52:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.271 20:52:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.271 20:52:16 -- common/autotest_common.sh@887 -- # return 0 00:05:55.271 20:52:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.271 20:52:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.271 20:52:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.271 20:52:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.272 20:52:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.529 20:52:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.529 { 00:05:55.529 "nbd_device": "/dev/nbd0", 00:05:55.529 "bdev_name": "Malloc0" 
00:05:55.529 }, 00:05:55.529 { 00:05:55.529 "nbd_device": "/dev/nbd1", 00:05:55.529 "bdev_name": "Malloc1" 00:05:55.529 } 00:05:55.529 ]' 00:05:55.529 20:52:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.529 { 00:05:55.529 "nbd_device": "/dev/nbd0", 00:05:55.529 "bdev_name": "Malloc0" 00:05:55.529 }, 00:05:55.529 { 00:05:55.529 "nbd_device": "/dev/nbd1", 00:05:55.529 "bdev_name": "Malloc1" 00:05:55.529 } 00:05:55.529 ]' 00:05:55.529 20:52:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.787 /dev/nbd1' 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.787 /dev/nbd1' 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.787 256+0 records in 00:05:55.787 256+0 records out 00:05:55.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00663554 s, 158 MB/s 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.787 256+0 records in 00:05:55.787 256+0 records out 00:05:55.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226084 s, 46.4 MB/s 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.787 256+0 records in 00:05:55.787 256+0 records out 00:05:55.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0363513 s, 28.8 MB/s 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@51 -- # local i 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.787 20:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@41 -- # break 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.044 20:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@41 -- # break 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.301 20:52:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@65 -- # true 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.558 20:52:17 -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.558 20:52:17 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.829 20:52:17 -- event/event.sh@35 -- # sleep 3 00:05:57.796 [2024-12-08 20:52:18.768747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.060 [2024-12-08 20:52:18.908347] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:58.060 [2024-12-08 20:52:18.908351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.060 [2024-12-08 20:52:19.039171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.060 [2024-12-08 20:52:19.039261] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.966 20:52:20 -- event/event.sh@38 -- # waitforlisten 57478 /var/tmp/spdk-nbd.sock 00:05:59.966 20:52:20 -- common/autotest_common.sh@829 -- # '[' -z 57478 ']' 00:05:59.966 20:52:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.966 20:52:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.966 20:52:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.966 20:52:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.966 20:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:00.224 20:52:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.224 20:52:21 -- common/autotest_common.sh@862 -- # return 0 00:06:00.224 20:52:21 -- event/event.sh@39 -- # killprocess 57478 00:06:00.224 20:52:21 -- common/autotest_common.sh@936 -- # '[' -z 57478 ']' 00:06:00.224 20:52:21 -- common/autotest_common.sh@940 -- # kill -0 57478 00:06:00.224 20:52:21 -- common/autotest_common.sh@941 -- # uname 00:06:00.224 20:52:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.224 20:52:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57478 00:06:00.224 killing process with pid 57478 00:06:00.224 20:52:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.224 20:52:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.224 20:52:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57478' 00:06:00.224 20:52:21 -- common/autotest_common.sh@955 -- # kill 57478 00:06:00.224 20:52:21 -- common/autotest_common.sh@960 -- # wait 57478 00:06:01.161 spdk_app_start is called in Round 0. 00:06:01.161 Shutdown signal received, stop current app iteration 00:06:01.161 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:01.161 spdk_app_start is called in Round 1. 00:06:01.161 Shutdown signal received, stop current app iteration 00:06:01.161 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:01.161 spdk_app_start is called in Round 2. 00:06:01.161 Shutdown signal received, stop current app iteration 00:06:01.161 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:01.161 spdk_app_start is called in Round 3. 
00:06:01.161 Shutdown signal received, stop current app iteration 00:06:01.162 ************************************ 00:06:01.162 END TEST app_repeat 00:06:01.162 ************************************ 00:06:01.162 20:52:21 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:01.162 20:52:21 -- event/event.sh@42 -- # return 0 00:06:01.162 00:06:01.162 real 0m19.759s 00:06:01.162 user 0m43.171s 00:06:01.162 sys 0m2.443s 00:06:01.162 20:52:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.162 20:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.162 20:52:21 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:01.162 20:52:21 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:01.162 20:52:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.162 20:52:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.162 20:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.162 ************************************ 00:06:01.162 START TEST cpu_locks 00:06:01.162 ************************************ 00:06:01.162 20:52:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:01.162 * Looking for test storage... 00:06:01.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:01.162 20:52:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:01.162 20:52:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:01.162 20:52:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.162 20:52:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.162 20:52:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.162 20:52:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.162 20:52:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.162 20:52:22 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.162 20:52:22 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.162 20:52:22 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.162 20:52:22 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.162 20:52:22 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.162 20:52:22 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.162 20:52:22 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.162 20:52:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.162 20:52:22 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.162 20:52:22 -- scripts/common.sh@344 -- # : 1 00:06:01.162 20:52:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.162 20:52:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.162 20:52:22 -- scripts/common.sh@364 -- # decimal 1 00:06:01.162 20:52:22 -- scripts/common.sh@352 -- # local d=1 00:06:01.162 20:52:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.162 20:52:22 -- scripts/common.sh@354 -- # echo 1 00:06:01.162 20:52:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.162 20:52:22 -- scripts/common.sh@365 -- # decimal 2 00:06:01.162 20:52:22 -- scripts/common.sh@352 -- # local d=2 00:06:01.162 20:52:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.162 20:52:22 -- scripts/common.sh@354 -- # echo 2 00:06:01.162 20:52:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.162 20:52:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.162 20:52:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.162 20:52:22 -- scripts/common.sh@367 -- # return 0 00:06:01.162 20:52:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.162 20:52:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.162 --rc genhtml_branch_coverage=1 00:06:01.162 --rc genhtml_function_coverage=1 00:06:01.162 --rc genhtml_legend=1 00:06:01.162 --rc geninfo_all_blocks=1 00:06:01.162 --rc geninfo_unexecuted_blocks=1 00:06:01.162 00:06:01.162 ' 00:06:01.162 20:52:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.162 --rc genhtml_branch_coverage=1 00:06:01.162 --rc genhtml_function_coverage=1 00:06:01.162 --rc genhtml_legend=1 00:06:01.162 --rc geninfo_all_blocks=1 00:06:01.162 --rc geninfo_unexecuted_blocks=1 00:06:01.162 00:06:01.162 ' 00:06:01.162 20:52:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.162 --rc genhtml_branch_coverage=1 00:06:01.162 --rc genhtml_function_coverage=1 00:06:01.162 --rc genhtml_legend=1 00:06:01.162 --rc geninfo_all_blocks=1 00:06:01.162 --rc geninfo_unexecuted_blocks=1 00:06:01.162 00:06:01.162 ' 00:06:01.162 20:52:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.162 --rc genhtml_branch_coverage=1 00:06:01.162 --rc genhtml_function_coverage=1 00:06:01.162 --rc genhtml_legend=1 00:06:01.162 --rc geninfo_all_blocks=1 00:06:01.162 --rc geninfo_unexecuted_blocks=1 00:06:01.162 00:06:01.162 ' 00:06:01.162 20:52:22 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:01.162 20:52:22 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:01.162 20:52:22 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:01.162 20:52:22 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:01.162 20:52:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.162 20:52:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.162 20:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:01.422 ************************************ 00:06:01.422 START TEST default_locks 00:06:01.422 ************************************ 00:06:01.422 20:52:22 -- common/autotest_common.sh@1114 -- # default_locks 00:06:01.422 20:52:22 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57924 00:06:01.422 20:52:22 -- event/cpu_locks.sh@47 -- # waitforlisten 57924 00:06:01.422 20:52:22 -- common/autotest_common.sh@829 -- # '[' -z 57924 ']' 00:06:01.422 20:52:22 
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.422 20:52:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.422 20:52:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.422 20:52:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.422 20:52:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.422 20:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:01.422 [2024-12-08 20:52:22.296487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.422 [2024-12-08 20:52:22.296665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57924 ] 00:06:01.422 [2024-12-08 20:52:22.451244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.682 [2024-12-08 20:52:22.594856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.682 [2024-12-08 20:52:22.595070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.251 20:52:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.251 20:52:23 -- common/autotest_common.sh@862 -- # return 0 00:06:02.251 20:52:23 -- event/cpu_locks.sh@49 -- # locks_exist 57924 00:06:02.251 20:52:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.251 20:52:23 -- event/cpu_locks.sh@22 -- # lslocks -p 57924 00:06:02.819 20:52:23 -- event/cpu_locks.sh@50 -- # killprocess 57924 00:06:02.819 20:52:23 -- common/autotest_common.sh@936 -- # '[' -z 57924 ']' 00:06:02.819 20:52:23 -- common/autotest_common.sh@940 -- # kill -0 57924 00:06:02.819 20:52:23 -- common/autotest_common.sh@941 -- # uname 00:06:02.819 20:52:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.819 20:52:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57924 00:06:02.819 20:52:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.819 20:52:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.819 killing process with pid 57924 00:06:02.819 20:52:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57924' 00:06:02.819 20:52:23 -- common/autotest_common.sh@955 -- # kill 57924 00:06:02.819 20:52:23 -- common/autotest_common.sh@960 -- # wait 57924 00:06:04.726 20:52:25 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57924 00:06:04.726 20:52:25 -- common/autotest_common.sh@650 -- # local es=0 00:06:04.726 20:52:25 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57924 00:06:04.726 20:52:25 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.726 20:52:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.726 20:52:25 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:04.726 20:52:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.726 20:52:25 -- common/autotest_common.sh@653 -- # waitforlisten 57924 00:06:04.726 20:52:25 -- common/autotest_common.sh@829 -- # '[' -z 57924 ']' 00:06:04.726 20:52:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.726 20:52:25 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.726 20:52:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.726 20:52:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.726 20:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:04.726 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57924) - No such process 00:06:04.726 ERROR: process (pid: 57924) is no longer running 00:06:04.726 20:52:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.726 20:52:25 -- common/autotest_common.sh@862 -- # return 1 00:06:04.726 20:52:25 -- common/autotest_common.sh@653 -- # es=1 00:06:04.726 20:52:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.726 20:52:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.726 20:52:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.726 20:52:25 -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.726 20:52:25 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.726 20:52:25 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.726 20:52:25 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.726 00:06:04.726 real 0m3.057s 00:06:04.726 user 0m3.249s 00:06:04.726 sys 0m0.512s 00:06:04.726 20:52:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.726 20:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:04.726 ************************************ 00:06:04.726 END TEST default_locks 00:06:04.726 ************************************ 00:06:04.726 20:52:25 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.726 20:52:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.727 20:52:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.727 20:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:04.727 ************************************ 00:06:04.727 START TEST default_locks_via_rpc 00:06:04.727 ************************************ 00:06:04.727 20:52:25 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:04.727 20:52:25 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57988 00:06:04.727 20:52:25 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.727 20:52:25 -- event/cpu_locks.sh@63 -- # waitforlisten 57988 00:06:04.727 20:52:25 -- common/autotest_common.sh@829 -- # '[' -z 57988 ']' 00:06:04.727 20:52:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.727 20:52:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.727 20:52:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.727 20:52:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.727 20:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:04.727 [2024-12-08 20:52:25.432762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
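Each TEST block in this suite starts the same way: spdk_tgt is launched and the harness parks on waitforlisten. The traced lines show it caching the RPC address (default /var/tmp/spdk.sock), capping itself at max_retries=100, and printing the 'Waiting for process...' banner, while the NOT waitforlisten case for pid 57924 above shows the failure path ('No such process', return 1). A reconstruction of that shape follows; the readiness probe itself is an assumed stand-in, since the trace only exposes the bookkeeping:

  waitforlisten() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk.sock}           # default seen throughout the trace
      local max_retries=100 i
      [ -z "$pid" ] && return 1
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 0; i < max_retries; i++ )); do
          kill -0 "$pid" || return 1                    # target is gone: the NOT case above ends here
          [ -S "$rpc_addr" ] && return 0                # assumed probe; the real helper checks the RPC socket
          sleep 0.1                                     # assumed pacing between retries
      done
      return 1
  }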
00:06:04.727 [2024-12-08 20:52:25.432911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57988 ]
00:06:04.727 [2024-12-08 20:52:25.598594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.727 [2024-12-08 20:52:25.738737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:04.727 [2024-12-08 20:52:25.738957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.106 20:52:27 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:06.106 20:52:27 -- common/autotest_common.sh@862 -- # return 0
00:06:06.106 20:52:27 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:06.106 20:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:06.106 20:52:27 -- common/autotest_common.sh@10 -- # set +x
00:06:06.106 20:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:06.106 20:52:27 -- event/cpu_locks.sh@67 -- # no_locks
00:06:06.106 20:52:27 -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:06.106 20:52:27 -- event/cpu_locks.sh@26 -- # local lock_files
00:06:06.106 20:52:27 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:06.106 20:52:27 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:06.106 20:52:27 -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:06.106 20:52:27 -- common/autotest_common.sh@10 -- # set +x
00:06:06.106 20:52:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:06.106 20:52:27 -- event/cpu_locks.sh@71 -- # locks_exist 57988
00:06:06.106 20:52:27 -- event/cpu_locks.sh@22 -- # lslocks -p 57988
00:06:06.106 20:52:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:06.365 20:52:27 -- event/cpu_locks.sh@73 -- # killprocess 57988
00:06:06.365 20:52:27 -- common/autotest_common.sh@936 -- # '[' -z 57988 ']'
00:06:06.365 20:52:27 -- common/autotest_common.sh@940 -- # kill -0 57988
00:06:06.365 20:52:27 -- common/autotest_common.sh@941 -- # uname
00:06:06.366 20:52:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:06.366 20:52:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57988
00:06:06.366 20:52:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:06.366 20:52:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:06.366 killing process with pid 57988
00:06:06.366 20:52:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57988'
00:06:06.366 20:52:27 -- common/autotest_common.sh@955 -- # kill 57988
00:06:06.366 20:52:27 -- common/autotest_common.sh@960 -- # wait 57988
00:06:08.272
00:06:08.272 real 0m3.623s
00:06:08.272 user 0m3.869s
00:06:08.272 sys 0m0.539s
00:06:08.272 20:52:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:08.272 20:52:28 -- common/autotest_common.sh@10 -- # set +x
00:06:08.272 ************************************
00:06:08.272 END TEST default_locks_via_rpc
00:06:08.272 ************************************
00:06:08.272 20:52:28 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:08.272 20:52:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:08.272 20:52:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:08.272 20:52:28 -- common/autotest_common.sh@10 -- # set +x
00:06:08.272 ************************************
00:06:08.272 START TEST non_locking_app_on_locked_coremask
00:06:08.272 ************************************
00:06:08.272 20:52:28 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask
00:06:08.272 20:52:28 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58064
00:06:08.272 20:52:28 -- event/cpu_locks.sh@81 -- # waitforlisten 58064 /var/tmp/spdk.sock
00:06:08.272 20:52:28 -- common/autotest_common.sh@829 -- # '[' -z 58064 ']'
00:06:08.272 20:52:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.272 20:52:28 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:08.272 20:52:28 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:08.272 20:52:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.272 20:52:28 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:08.272 20:52:28 -- common/autotest_common.sh@10 -- # set +x
00:06:08.272 [2024-12-08 20:52:29.103726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:08.272 [2024-12-08 20:52:29.103902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58064 ]
00:06:08.272 [2024-12-08 20:52:29.276429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.531 [2024-12-08 20:52:29.417907] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:08.531 [2024-12-08 20:52:29.418146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.101 20:52:29 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:09.101 20:52:29 -- common/autotest_common.sh@862 -- # return 0
00:06:09.101 20:52:29 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58075
00:06:09.101 20:52:29 -- event/cpu_locks.sh@85 -- # waitforlisten 58075 /var/tmp/spdk2.sock
00:06:09.101 20:52:29 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:09.101 20:52:29 -- common/autotest_common.sh@829 -- # '[' -z 58075 ']'
00:06:09.101 20:52:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:09.101 20:52:29 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:09.101 20:52:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:09.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:09.101 20:52:29 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:09.101 20:52:29 -- common/autotest_common.sh@10 -- # set +x
00:06:09.101 [2024-12-08 20:52:30.097340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:09.101 [2024-12-08 20:52:30.097540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58075 ]
00:06:09.360 [2024-12-08 20:52:30.267972] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:09.360 [2024-12-08 20:52:30.268025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.619 [2024-12-08 20:52:30.557925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:09.619 [2024-12-08 20:52:30.558154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.999 20:52:31 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:10.999 20:52:31 -- common/autotest_common.sh@862 -- # return 0
00:06:10.999 20:52:31 -- event/cpu_locks.sh@87 -- # locks_exist 58064
00:06:10.999 20:52:31 -- event/cpu_locks.sh@22 -- # lslocks -p 58064
00:06:10.999 20:52:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:11.567 20:52:32 -- event/cpu_locks.sh@89 -- # killprocess 58064
00:06:11.567 20:52:32 -- common/autotest_common.sh@936 -- # '[' -z 58064 ']'
00:06:11.567 20:52:32 -- common/autotest_common.sh@940 -- # kill -0 58064
00:06:11.567 20:52:32 -- common/autotest_common.sh@941 -- # uname
00:06:11.567 20:52:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:11.567 20:52:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58064
killing process with pid 58064
00:06:11.567 20:52:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:11.567 20:52:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:11.567 20:52:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58064'
00:06:11.567 20:52:32 -- common/autotest_common.sh@955 -- # kill 58064
00:06:11.567 20:52:32 -- common/autotest_common.sh@960 -- # wait 58064
00:06:14.854 20:52:35 -- event/cpu_locks.sh@90 -- # killprocess 58075
00:06:14.854 20:52:35 -- common/autotest_common.sh@936 -- # '[' -z 58075 ']'
00:06:14.854 20:52:35 -- common/autotest_common.sh@940 -- # kill -0 58075
00:06:14.854 20:52:35 -- common/autotest_common.sh@941 -- # uname
00:06:14.854 20:52:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:14.854 20:52:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58075
00:06:14.854 20:52:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:14.854 20:52:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
killing process with pid 58075
00:06:14.854 20:52:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58075'
00:06:14.854 20:52:35 -- common/autotest_common.sh@955 -- # kill 58075
00:06:14.854 20:52:35 -- common/autotest_common.sh@960 -- # wait 58075
00:06:16.759
00:06:16.759 real 0m8.391s
00:06:16.759 user 0m9.014s
00:06:16.759 sys 0m1.177s
00:06:16.759 20:52:37 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:16.759 20:52:37 -- common/autotest_common.sh@10 -- # set +x
00:06:16.759 ************************************
00:06:16.759 END TEST non_locking_app_on_locked_coremask
00:06:16.759 ************************************
00:06:16.759 20:52:37 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:16.759 20:52:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:16.759 20:52:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:16.759 20:52:37 -- common/autotest_common.sh@10 -- # set +x
00:06:16.759 ************************************
00:06:16.759 START TEST locking_app_on_unlocked_coremask
00:06:16.759 ************************************
00:06:16.759 20:52:37 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask
00:06:16.759 20:52:37 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58195
00:06:16.759 20:52:37 -- event/cpu_locks.sh@99 -- # waitforlisten 58195 /var/tmp/spdk.sock
00:06:16.759 20:52:37 -- common/autotest_common.sh@829 -- # '[' -z 58195 ']'
00:06:16.759 20:52:37 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:16.759 20:52:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.759 20:52:37 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:16.759 20:52:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:16.759 20:52:37 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:16.759 20:52:37 -- common/autotest_common.sh@10 -- # set +x
00:06:16.759 [2024-12-08 20:52:37.550291] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:16.759 [2024-12-08 20:52:37.550460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58195 ]
00:06:16.759 [2024-12-08 20:52:37.719448] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:16.759 [2024-12-08 20:52:37.719487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.018 [2024-12-08 20:52:37.860310] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:17.018 [2024-12-08 20:52:37.860524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.587 20:52:38 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:17.587 20:52:38 -- common/autotest_common.sh@862 -- # return 0
00:06:17.587 20:52:38 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58210
00:06:17.587 20:52:38 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:17.587 20:52:38 -- event/cpu_locks.sh@103 -- # waitforlisten 58210 /var/tmp/spdk2.sock
00:06:17.587 20:52:38 -- common/autotest_common.sh@829 -- # '[' -z 58210 ']'
00:06:17.587 20:52:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:17.587 20:52:38 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:17.587 20:52:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:17.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:17.587 20:52:38 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:17.587 20:52:38 -- common/autotest_common.sh@10 -- # set +x
00:06:17.587 [2024-12-08 20:52:38.525342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:17.587 [2024-12-08 20:52:38.525525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58210 ]
00:06:17.846 [2024-12-08 20:52:38.697294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.106 [2024-12-08 20:52:38.988167] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:18.106 [2024-12-08 20:52:38.988405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.485 20:52:40 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:19.485 20:52:40 -- common/autotest_common.sh@862 -- # return 0
00:06:19.485 20:52:40 -- event/cpu_locks.sh@105 -- # locks_exist 58210
00:06:19.485 20:52:40 -- event/cpu_locks.sh@22 -- # lslocks -p 58210
00:06:19.485 20:52:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:20.054 20:52:41 -- event/cpu_locks.sh@107 -- # killprocess 58195
00:06:20.054 20:52:41 -- common/autotest_common.sh@936 -- # '[' -z 58195 ']'
00:06:20.054 20:52:41 -- common/autotest_common.sh@940 -- # kill -0 58195
00:06:20.054 20:52:41 -- common/autotest_common.sh@941 -- # uname
00:06:20.054 20:52:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:20.054 20:52:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58195
00:06:20.054 20:52:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:20.054 20:52:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
killing process with pid 58195
00:06:20.054 20:52:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58195'
00:06:20.054 20:52:41 -- common/autotest_common.sh@955 -- # kill 58195
00:06:20.054 20:52:41 -- common/autotest_common.sh@960 -- # wait 58195
00:06:23.346 20:52:44 -- event/cpu_locks.sh@108 -- # killprocess 58210
00:06:23.346 20:52:44 -- common/autotest_common.sh@936 -- # '[' -z 58210 ']'
00:06:23.346 20:52:44 -- common/autotest_common.sh@940 -- # kill -0 58210
00:06:23.346 20:52:44 -- common/autotest_common.sh@941 -- # uname
00:06:23.346 20:52:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:23.346 20:52:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58210
killing process with pid 58210
00:06:23.346 20:52:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:23.346 20:52:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:23.346 20:52:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58210'
00:06:23.346 20:52:44 -- common/autotest_common.sh@955 -- # kill 58210
00:06:23.346 20:52:44 -- common/autotest_common.sh@960 -- # wait 58210
00:06:25.256 ************************************
00:06:25.256 END TEST locking_app_on_unlocked_coremask
00:06:25.256 ************************************
00:06:25.256
00:06:25.256 real 0m8.461s
00:06:25.256 user 0m9.169s
00:06:25.256 sys 0m1.150s
00:06:25.256 20:52:45 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:25.256 20:52:45 -- common/autotest_common.sh@10 -- # set +x
00:06:25.256 20:52:45 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:25.256 20:52:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:25.256 20:52:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:25.256 20:52:45 -- common/autotest_common.sh@10 -- # set +x
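[Editor's note — not part of the captured log.] The cpu_locks tests above all probe the same mechanism: a spdk_tgt started with a core mask takes a file lock per claimed core, and the suite checks for it with lslocks. A minimal sketch of that check, reconstructed only from the xtrace lines visible in this log (the real helpers live in test/event/cpu_locks.sh and may differ in detail):

    # Assumption: reconstructed from the visible xtrace, not copied from cpu_locks.sh.
    locks_exist() {
        local pid=$1
        # Each claimed core corresponds to a lock on /var/tmp/spdk_cpu_lock_NNN;
        # lslocks -p lists the file locks held by the given pid.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 58210 && echo "core lock held"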
00:06:25.256 ************************************
00:06:25.256 START TEST locking_app_on_locked_coremask
00:06:25.256 ************************************
00:06:25.256 20:52:45 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask
00:06:25.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:25.256 20:52:45 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58332
00:06:25.256 20:52:45 -- event/cpu_locks.sh@116 -- # waitforlisten 58332 /var/tmp/spdk.sock
00:06:25.256 20:52:45 -- common/autotest_common.sh@829 -- # '[' -z 58332 ']'
00:06:25.256 20:52:45 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:25.256 20:52:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:25.256 20:52:45 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:25.256 20:52:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:25.256 20:52:45 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:25.256 20:52:45 -- common/autotest_common.sh@10 -- # set +x
00:06:25.256 [2024-12-08 20:52:46.031898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:25.256 [2024-12-08 20:52:46.032365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58332 ]
00:06:25.256 [2024-12-08 20:52:46.183174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:25.514 [2024-12-08 20:52:46.327898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:25.515 [2024-12-08 20:52:46.328436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.081 20:52:46 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:26.081 20:52:46 -- common/autotest_common.sh@862 -- # return 0
00:06:26.081 20:52:46 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58348
00:06:26.081 20:52:46 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:26.081 20:52:46 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58348 /var/tmp/spdk2.sock
00:06:26.081 20:52:46 -- common/autotest_common.sh@650 -- # local es=0
00:06:26.081 20:52:46 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58348 /var/tmp/spdk2.sock
00:06:26.081 20:52:46 -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:26.081 20:52:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:26.081 20:52:46 -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:26.081 20:52:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:26.082 20:52:46 -- common/autotest_common.sh@653 -- # waitforlisten 58348 /var/tmp/spdk2.sock
00:06:26.082 20:52:46 -- common/autotest_common.sh@829 -- # '[' -z 58348 ']'
00:06:26.082 20:52:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:26.082 20:52:46 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:26.082 20:52:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:26.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:26.082 20:52:46 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:26.082 20:52:46 -- common/autotest_common.sh@10 -- # set +x
00:06:26.082 [2024-12-08 20:52:47.093781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:26.082 [2024-12-08 20:52:47.094742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58348 ]
00:06:26.340 [2024-12-08 20:52:47.270003] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58332 has claimed it.
00:06:26.340 [2024-12-08 20:52:47.270059] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:26.907 ERROR: process (pid: 58348) is no longer running
00:06:26.907 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (58348) - No such process
00:06:26.907 20:52:47 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:26.907 20:52:47 -- common/autotest_common.sh@862 -- # return 1
00:06:26.907 20:52:47 -- common/autotest_common.sh@653 -- # es=1
00:06:26.907 20:52:47 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:26.907 20:52:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:26.907 20:52:47 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:26.907 20:52:47 -- event/cpu_locks.sh@122 -- # locks_exist 58332
00:06:26.907 20:52:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:26.907 20:52:47 -- event/cpu_locks.sh@22 -- # lslocks -p 58332
00:06:27.166 20:52:48 -- event/cpu_locks.sh@124 -- # killprocess 58332
00:06:27.166 20:52:48 -- common/autotest_common.sh@936 -- # '[' -z 58332 ']'
00:06:27.166 20:52:48 -- common/autotest_common.sh@940 -- # kill -0 58332
00:06:27.166 20:52:48 -- common/autotest_common.sh@941 -- # uname
00:06:27.166 20:52:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:27.166 20:52:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58332
killing process with pid 58332
00:06:27.166 20:52:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:27.166 20:52:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:27.166 20:52:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58332'
00:06:27.166 20:52:48 -- common/autotest_common.sh@955 -- # kill 58332
00:06:27.166 20:52:48 -- common/autotest_common.sh@960 -- # wait 58332
00:06:29.071 ************************************
00:06:29.071 END TEST locking_app_on_locked_coremask
00:06:29.071 ************************************
00:06:29.071
00:06:29.071 real 0m3.833s
00:06:29.071 user 0m4.279s
00:06:29.071 sys 0m0.684s
00:06:29.071 20:52:49 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:29.071 20:52:49 -- common/autotest_common.sh@10 -- # set +x
00:06:29.071 20:52:49 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:29.071 20:52:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:29.071 20:52:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:29.071 20:52:49 -- common/autotest_common.sh@10 -- # set +x
00:06:29.071 ************************************
00:06:29.071 START TEST locking_overlapped_coremask
00:06:29.071 ************************************
00:06:29.071 20:52:49 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask
00:06:29.071 20:52:49 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58401
00:06:29.071 20:52:49 -- event/cpu_locks.sh@133 -- # waitforlisten 58401 /var/tmp/spdk.sock
00:06:29.071 20:52:49 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:29.071 20:52:49 -- common/autotest_common.sh@829 -- # '[' -z 58401 ']'
00:06:29.071 20:52:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:29.071 20:52:49 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:29.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:29.071 20:52:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:29.071 20:52:49 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:29.071 20:52:49 -- common/autotest_common.sh@10 -- # set +x
00:06:29.071 [2024-12-08 20:52:49.948765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:29.071 [2024-12-08 20:52:49.949146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58401 ]
00:06:29.330 [2024-12-08 20:52:50.115560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:29.330 [2024-12-08 20:52:50.258399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:29.330 [2024-12-08 20:52:50.258748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:29.330 [2024-12-08 20:52:50.259038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.330 [2024-12-08 20:52:50.259053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:30.708 20:52:51 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:30.708 20:52:51 -- common/autotest_common.sh@862 -- # return 0
00:06:30.708 20:52:51 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:30.708 20:52:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58432
00:06:30.708 20:52:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58432 /var/tmp/spdk2.sock
00:06:30.708 20:52:51 -- common/autotest_common.sh@650 -- # local es=0
00:06:30.708 20:52:51 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58432 /var/tmp/spdk2.sock
00:06:30.708 20:52:51 -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:30.708 20:52:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:30.708 20:52:51 -- common/autotest_common.sh@642 -- # type -t waitforlisten
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:30.708 20:52:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:30.708 20:52:51 -- common/autotest_common.sh@653 -- # waitforlisten 58432 /var/tmp/spdk2.sock
00:06:30.708 20:52:51 -- common/autotest_common.sh@829 -- # '[' -z 58432 ']'
00:06:30.708 20:52:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:30.708 20:52:51 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:30.708 20:52:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:30.708 20:52:51 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:30.708 20:52:51 -- common/autotest_common.sh@10 -- # set +x
00:06:30.708 [2024-12-08 20:52:51.589422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:30.708 [2024-12-08 20:52:51.589570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58432 ]
00:06:30.967 [2024-12-08 20:52:51.757247] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58401 has claimed it.
00:06:30.967 [2024-12-08 20:52:51.757345] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:31.226 ERROR: process (pid: 58432) is no longer running
00:06:31.226 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (58432) - No such process
00:06:31.226 20:52:52 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:31.226 20:52:52 -- common/autotest_common.sh@862 -- # return 1
00:06:31.226 20:52:52 -- common/autotest_common.sh@653 -- # es=1
00:06:31.226 20:52:52 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:31.226 20:52:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:31.226 20:52:52 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:31.226 20:52:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:31.226 20:52:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:31.226 20:52:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:31.226 20:52:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:31.226 20:52:52 -- event/cpu_locks.sh@141 -- # killprocess 58401
00:06:31.226 20:52:52 -- common/autotest_common.sh@936 -- # '[' -z 58401 ']'
00:06:31.226 20:52:52 -- common/autotest_common.sh@940 -- # kill -0 58401
00:06:31.226 20:52:52 -- common/autotest_common.sh@941 -- # uname
00:06:31.485 20:52:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:31.485 20:52:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58401
00:06:31.485 20:52:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:31.485 20:52:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:31.485 20:52:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58401'
killing process with pid 58401
00:06:31.485 20:52:52 -- common/autotest_common.sh@955 -- # kill 58401
00:06:31.485 20:52:52 -- common/autotest_common.sh@960 -- # wait 58401
00:06:33.459
00:06:33.459 real 0m4.181s
00:06:33.459 user 0m11.459s
00:06:33.459 sys 0m0.509s
00:06:33.459 20:52:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:33.459 20:52:54 -- common/autotest_common.sh@10 -- # set +x
00:06:33.459 ************************************
00:06:33.459 END TEST locking_overlapped_coremask
00:06:33.459 ************************************
00:06:33.459 20:52:54 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:33.459 20:52:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:33.459 20:52:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:33.459 20:52:54 -- common/autotest_common.sh@10 -- # set +x
00:06:33.459 ************************************
00:06:33.459 START TEST locking_overlapped_coremask_via_rpc
00:06:33.459 ************************************
00:06:33.459 20:52:54 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc
00:06:33.459 20:52:54 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58493
00:06:33.459 20:52:54 -- event/cpu_locks.sh@149 -- # waitforlisten 58493 /var/tmp/spdk.sock
00:06:33.459 20:52:54 -- common/autotest_common.sh@829 -- # '[' -z 58493 ']'
00:06:33.459 20:52:54 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:33.459 20:52:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:33.459 20:52:54 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:33.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:33.459 20:52:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:33.459 20:52:54 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:33.459 20:52:54 -- common/autotest_common.sh@10 -- # set +x
00:06:33.459 [2024-12-08 20:52:54.183258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:33.459 [2024-12-08 20:52:54.183423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58493 ]
00:06:33.459 [2024-12-08 20:52:54.352691] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:33.459 [2024-12-08 20:52:54.352883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:33.459 [2024-12-08 20:52:54.495286] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:33.459 [2024-12-08 20:52:54.495636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:33.459 [2024-12-08 20:52:54.495920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.459 [2024-12-08 20:52:54.495928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:34.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:34.832 20:52:55 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:34.832 20:52:55 -- common/autotest_common.sh@862 -- # return 0
00:06:34.832 20:52:55 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:34.832 20:52:55 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58518
00:06:34.832 20:52:55 -- event/cpu_locks.sh@153 -- # waitforlisten 58518 /var/tmp/spdk2.sock
00:06:34.832 20:52:55 -- common/autotest_common.sh@829 -- # '[' -z 58518 ']'
00:06:34.832 20:52:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:34.832 20:52:55 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:34.832 20:52:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:34.832 20:52:55 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:34.832 20:52:55 -- common/autotest_common.sh@10 -- # set +x
00:06:34.832 [2024-12-08 20:52:55.817229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:34.832 [2024-12-08 20:52:55.817586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58518 ]
00:06:35.091 [2024-12-08 20:52:55.981431] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:35.091 [2024-12-08 20:52:55.981495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:35.350 [2024-12-08 20:52:56.302729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:35.350 [2024-12-08 20:52:56.303233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:35.350 [2024-12-08 20:52:56.304860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:35.350 [2024-12-08 20:52:56.305047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:06:37.253 20:52:58 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:37.253 20:52:58 -- common/autotest_common.sh@862 -- # return 0
00:06:37.253 20:52:58 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:37.253 20:52:58 -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:37.253 20:52:58 -- common/autotest_common.sh@10 -- # set +x
00:06:37.253 20:52:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:37.253 20:52:58 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:37.253 20:52:58 -- common/autotest_common.sh@650 -- # local es=0
00:06:37.253 20:52:58 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:37.253 20:52:58 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:06:37.253 20:52:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:37.253 20:52:58 -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:06:37.253 20:52:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:37.253 20:52:58 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:37.253 20:52:58 -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:37.253 20:52:58 -- common/autotest_common.sh@10 -- # set +x
00:06:37.253 [2024-12-08 20:52:58.255284] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58493 has claimed it.
00:06:37.253 request:
00:06:37.253 {
00:06:37.253 "method": "framework_enable_cpumask_locks",
00:06:37.253 "req_id": 1
00:06:37.253 }
00:06:37.253 Got JSON-RPC error response
00:06:37.253 response:
00:06:37.253 {
00:06:37.253 "code": -32603,
00:06:37.253 "message": "Failed to claim CPU core: 2"
00:06:37.253 }
00:06:37.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
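[Editor's note — not part of the captured log.] The request/response pair above is a raw JSON-RPC exchange: the second target is asked to enable cpumask locks while pid 58493 still holds the lock on core 2, so the call fails with -32603. As a hedged illustration only (the test drives this through its own rpc_cmd wrapper), the same call could be issued by hand with SPDK's bundled rpc.py against the secondary socket used in this run:

    # Fails with "Failed to claim CPU core: 2" while pid 58493 holds the core-2 lock.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks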
00:06:37.253 20:52:58 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:06:37.253 20:52:58 -- common/autotest_common.sh@653 -- # es=1
00:06:37.253 20:52:58 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:37.253 20:52:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:37.253 20:52:58 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:37.253 20:52:58 -- event/cpu_locks.sh@158 -- # waitforlisten 58493 /var/tmp/spdk.sock
00:06:37.253 20:52:58 -- common/autotest_common.sh@829 -- # '[' -z 58493 ']'
00:06:37.253 20:52:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:37.253 20:52:58 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:37.253 20:52:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:37.253 20:52:58 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:37.253 20:52:58 -- common/autotest_common.sh@10 -- # set +x
00:06:37.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:37.511 20:52:58 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:37.511 20:52:58 -- common/autotest_common.sh@862 -- # return 0
00:06:37.511 20:52:58 -- event/cpu_locks.sh@159 -- # waitforlisten 58518 /var/tmp/spdk2.sock
00:06:37.511 20:52:58 -- common/autotest_common.sh@829 -- # '[' -z 58518 ']'
00:06:37.511 20:52:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:37.511 20:52:58 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:37.511 20:52:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:37.511 20:52:58 -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:37.511 20:52:58 -- common/autotest_common.sh@10 -- # set +x
00:06:37.768 ************************************
00:06:37.768 END TEST locking_overlapped_coremask_via_rpc
00:06:37.768 ************************************
00:06:37.768 20:52:58 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:37.768 20:52:58 -- common/autotest_common.sh@862 -- # return 0
00:06:37.768 20:52:58 -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:06:37.768 20:52:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:37.768 20:52:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:37.768 20:52:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:37.768
00:06:37.768 real 0m4.733s
00:06:37.768 user 0m1.925s
00:06:37.768 sys 0m0.231s
00:06:37.768 20:52:58 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:37.768 20:52:58 -- common/autotest_common.sh@10 -- # set +x
00:06:38.025 20:52:58 -- event/cpu_locks.sh@174 -- # cleanup
00:06:38.025 20:52:58 -- event/cpu_locks.sh@15 -- # [[ -z 58493 ]]
00:06:38.025 20:52:58 -- event/cpu_locks.sh@15 -- # killprocess 58493
00:06:38.025 20:52:58 -- common/autotest_common.sh@936 -- # '[' -z 58493 ']'
00:06:38.025 20:52:58 -- common/autotest_common.sh@940 -- # kill -0 58493
00:06:38.025 20:52:58 -- common/autotest_common.sh@941 -- # uname
00:06:38.025 20:52:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:38.025 20:52:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58493
killing process with pid 58493
00:06:38.025 20:52:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:38.025 20:52:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:38.025 20:52:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58493'
00:06:38.025 20:52:58 -- common/autotest_common.sh@955 -- # kill 58493
00:06:38.025 20:52:58 -- common/autotest_common.sh@960 -- # wait 58493
00:06:39.926 20:53:00 -- event/cpu_locks.sh@16 -- # [[ -z 58518 ]]
00:06:39.926 20:53:00 -- event/cpu_locks.sh@16 -- # killprocess 58518
00:06:39.926 20:53:00 -- common/autotest_common.sh@936 -- # '[' -z 58518 ']'
00:06:39.926 20:53:00 -- common/autotest_common.sh@940 -- # kill -0 58518
00:06:39.926 20:53:00 -- common/autotest_common.sh@941 -- # uname
00:06:39.926 20:53:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:39.926 20:53:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58518
killing process with pid 58518
00:06:39.926 20:53:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:06:39.926 20:53:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:06:39.926 20:53:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58518'
00:06:39.926 20:53:00 -- common/autotest_common.sh@955 -- # kill 58518
00:06:39.926 20:53:00 -- common/autotest_common.sh@960 -- # wait 58518
00:06:41.832 20:53:02 -- event/cpu_locks.sh@18 -- # rm -f
00:06:41.832 20:53:02 -- event/cpu_locks.sh@1 -- # cleanup
00:06:41.832 20:53:02 -- event/cpu_locks.sh@15 -- # [[ -z 58493 ]]
00:06:41.832 20:53:02 -- event/cpu_locks.sh@15 -- # killprocess 58493
00:06:41.832 20:53:02 -- common/autotest_common.sh@936 -- # '[' -z 58493 ']'
00:06:41.832 20:53:02 -- common/autotest_common.sh@940 -- # kill -0 58493
00:06:41.832 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58493) - No such process
00:06:41.832 Process with pid 58493 is not found
00:06:41.832 20:53:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58493 is not found'
00:06:41.832 Process with pid 58518 is not found
00:06:41.832 20:53:02 -- event/cpu_locks.sh@16 -- # [[ -z 58518 ]]
00:06:41.832 20:53:02 -- event/cpu_locks.sh@16 -- # killprocess 58518
00:06:41.832 20:53:02 -- common/autotest_common.sh@936 -- # '[' -z 58518 ']'
00:06:41.832 20:53:02 -- common/autotest_common.sh@940 -- # kill -0 58518
00:06:41.832 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58518) - No such process
00:06:41.832 20:53:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58518 is not found'
00:06:41.832 20:53:02 -- event/cpu_locks.sh@18 -- # rm -f
00:06:41.832 ************************************
00:06:41.832 END TEST cpu_locks
00:06:41.832 ************************************
00:06:41.832
00:06:41.832 real 0m40.408s
00:06:41.832 user 1m14.416s
00:06:41.832 sys 0m5.707s
00:06:41.832 20:53:02 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:41.832 20:53:02 -- common/autotest_common.sh@10 -- # set +x
00:06:41.832 ************************************
00:06:41.832 END TEST event
00:06:41.832 ************************************
00:06:41.832
00:06:41.832 real 1m11.024s
00:06:41.832 user 2m14.458s
00:06:41.832 sys 0m9.126s
00:06:41.832 20:53:02 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:41.832 20:53:02 -- common/autotest_common.sh@10 -- # set +x
00:06:41.832 20:53:02 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:06:41.832 20:53:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:41.832 20:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:41.832 20:53:02 -- common/autotest_common.sh@10 -- # set +x
00:06:41.832 ************************************
00:06:41.832 START TEST thread
00:06:41.832 ************************************
00:06:41.832 20:53:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:06:41.832 * Looking for test storage...
00:06:41.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:06:41.832 20:53:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:41.832 20:53:02 -- common/autotest_common.sh@1690 -- # lcov --version
00:06:41.832 20:53:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:41.832 20:53:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:41.832 20:53:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:41.832 20:53:02 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:41.832 20:53:02 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:41.832 20:53:02 -- scripts/common.sh@335 -- # IFS=.-:
00:06:41.832 20:53:02 -- scripts/common.sh@335 -- # read -ra ver1
00:06:41.832 20:53:02 -- scripts/common.sh@336 -- # IFS=.-:
00:06:41.832 20:53:02 -- scripts/common.sh@336 -- # read -ra ver2
00:06:41.832 20:53:02 -- scripts/common.sh@337 -- # local 'op=<'
00:06:41.832 20:53:02 -- scripts/common.sh@339 -- # ver1_l=2
00:06:41.832 20:53:02 -- scripts/common.sh@340 -- # ver2_l=1
00:06:41.832 20:53:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:41.832 20:53:02 -- scripts/common.sh@343 -- # case "$op" in
00:06:41.832 20:53:02 -- scripts/common.sh@344 -- # : 1
00:06:41.832 20:53:02 -- scripts/common.sh@363 -- # (( v = 0 ))
00:06:41.832 20:53:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:41.832 20:53:02 -- scripts/common.sh@364 -- # decimal 1
00:06:41.832 20:53:02 -- scripts/common.sh@352 -- # local d=1
00:06:41.832 20:53:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:41.832 20:53:02 -- scripts/common.sh@354 -- # echo 1
00:06:41.832 20:53:02 -- scripts/common.sh@364 -- # ver1[v]=1
00:06:41.832 20:53:02 -- scripts/common.sh@365 -- # decimal 2
00:06:41.832 20:53:02 -- scripts/common.sh@352 -- # local d=2
00:06:41.832 20:53:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:41.832 20:53:02 -- scripts/common.sh@354 -- # echo 2
00:06:41.832 20:53:02 -- scripts/common.sh@365 -- # ver2[v]=2
00:06:41.832 20:53:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:41.832 20:53:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:41.832 20:53:02 -- scripts/common.sh@367 -- # return 0
00:06:41.832 20:53:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:41.832 20:53:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:41.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.832 --rc genhtml_branch_coverage=1
00:06:41.832 --rc genhtml_function_coverage=1
00:06:41.832 --rc genhtml_legend=1
00:06:41.832 --rc geninfo_all_blocks=1
00:06:41.832 --rc geninfo_unexecuted_blocks=1
00:06:41.832
00:06:41.832 '
00:06:41.832 20:53:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:41.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.832 --rc genhtml_branch_coverage=1
00:06:41.832 --rc genhtml_function_coverage=1
00:06:41.832 --rc genhtml_legend=1
00:06:41.832 --rc geninfo_all_blocks=1
00:06:41.832 --rc geninfo_unexecuted_blocks=1
00:06:41.832
00:06:41.832 '
00:06:41.832 20:53:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:06:41.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.832 --rc genhtml_branch_coverage=1
00:06:41.832 --rc genhtml_function_coverage=1
00:06:41.832 --rc genhtml_legend=1
00:06:41.832 --rc geninfo_all_blocks=1
00:06:41.832 --rc geninfo_unexecuted_blocks=1
00:06:41.832
00:06:41.832 '
00:06:41.832 20:53:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:06:41.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.832 --rc genhtml_branch_coverage=1
00:06:41.832 --rc genhtml_function_coverage=1
00:06:41.832 --rc genhtml_legend=1
00:06:41.832 --rc geninfo_all_blocks=1
00:06:41.832 --rc geninfo_unexecuted_blocks=1
00:06:41.832
00:06:41.832 '
00:06:41.832 20:53:02 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:41.832 20:53:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:06:41.832 20:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:41.832 20:53:02 -- common/autotest_common.sh@10 -- # set +x
00:06:41.832 ************************************
00:06:41.832 START TEST thread_poller_perf
00:06:41.832 ************************************
00:06:41.833 20:53:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:41.833 [2024-12-08 20:53:02.713266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:41.833 [2024-12-08 20:53:02.713528] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58706 ]
00:06:41.833 [2024-12-08 20:53:02.871152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.092 [2024-12-08 20:53:03.094380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.092 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:06:43.471 [2024-12-08T20:53:04.514Z] ======================================
[2024-12-08T20:53:04.514Z] busy:2212599544 (cyc)
[2024-12-08T20:53:04.514Z] total_run_count: 365000
[2024-12-08T20:53:04.514Z] tsc_hz: 2200000000 (cyc)
[2024-12-08T20:53:04.514Z] ======================================
[2024-12-08T20:53:04.514Z] poller_cost: 6061 (cyc), 2755 (nsec)
00:06:43.471
00:06:43.471 real 0m1.700s
00:06:43.471 user 0m1.498s
00:06:43.471 sys 0m0.092s
00:06:43.471 ************************************
00:06:43.471 END TEST thread_poller_perf
00:06:43.471 ************************************
00:06:43.471 20:53:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:43.471 20:53:04 -- common/autotest_common.sh@10 -- # set +x
00:06:43.471 20:53:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:43.471 20:53:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:06:43.471 20:53:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:43.471 20:53:04 -- common/autotest_common.sh@10 -- # set +x
00:06:43.471 ************************************
00:06:43.471 START TEST thread_poller_perf
00:06:43.471 ************************************
00:06:43.471 20:53:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:43.471 [2024-12-08 20:53:04.476479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:43.471 [2024-12-08 20:53:04.476799] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58743 ]
00:06:43.730 [2024-12-08 20:53:04.642726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.990 [2024-12-08 20:53:04.795786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.990 Running 1000 pollers for 1 seconds with 0 microseconds period.
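[Editor's note — not part of the captured log.] The poller_cost in the first run above follows directly from the two preceding counters: poller_cost = busy / total_run_count, converted to nanoseconds via tsc_hz. Here 2212599544 cyc / 365000 runs ≈ 6061 cyc per poller invocation, and 6061 cyc / 2.2 GHz ≈ 2755 ns — exactly the printed "6061 (cyc), 2755 (nsec)". The same arithmetic applies to the 0-microsecond-period run that follows.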
[2024-12-08T20:53:06.422Z] ======================================
[2024-12-08T20:53:06.422Z] busy:2204707042 (cyc)
[2024-12-08T20:53:06.422Z] total_run_count: 4834000
[2024-12-08T20:53:06.422Z] tsc_hz: 2200000000 (cyc)
[2024-12-08T20:53:06.422Z] ======================================
[2024-12-08T20:53:06.422Z] poller_cost: 456 (cyc), 207 (nsec)
00:06:45.379 ************************************
00:06:45.379
00:06:45.379 real 0m1.641s
00:06:45.379 user 0m1.445s
00:06:45.379 sys 0m0.087s
00:06:45.379 20:53:06 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:45.379 20:53:06 -- common/autotest_common.sh@10 -- # set +x
00:06:45.379 END TEST thread_poller_perf
00:06:45.379 ************************************
00:06:45.379 20:53:06 -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:45.379 ************************************
00:06:45.379 END TEST thread
00:06:45.379 ************************************
00:06:45.379
00:06:45.379 real 0m3.614s
00:06:45.379 user 0m3.083s
00:06:45.379 sys 0m0.304s
00:06:45.379 20:53:06 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:45.379 20:53:06 -- common/autotest_common.sh@10 -- # set +x
00:06:45.379 20:53:06 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:06:45.379 20:53:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:45.379 20:53:06 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:45.379 20:53:06 -- common/autotest_common.sh@10 -- # set +x
00:06:45.379 ************************************
00:06:45.379 START TEST accel
00:06:45.379 ************************************
00:06:45.379 20:53:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:06:45.379 * Looking for test storage...
00:06:45.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:06:45.379 20:53:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:45.379 20:53:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:45.379 20:53:06 -- common/autotest_common.sh@1690 -- # lcov --version
00:06:45.379 20:53:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:45.379 20:53:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:45.379 20:53:06 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:45.379 20:53:06 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:45.379 20:53:06 -- scripts/common.sh@335 -- # IFS=.-:
00:06:45.379 20:53:06 -- scripts/common.sh@335 -- # read -ra ver1
00:06:45.379 20:53:06 -- scripts/common.sh@336 -- # IFS=.-:
00:06:45.379 20:53:06 -- scripts/common.sh@336 -- # read -ra ver2
00:06:45.379 20:53:06 -- scripts/common.sh@337 -- # local 'op=<'
00:06:45.379 20:53:06 -- scripts/common.sh@339 -- # ver1_l=2
00:06:45.379 20:53:06 -- scripts/common.sh@340 -- # ver2_l=1
00:06:45.379 20:53:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:45.379 20:53:06 -- scripts/common.sh@343 -- # case "$op" in
00:06:45.379 20:53:06 -- scripts/common.sh@344 -- # : 1
00:06:45.379 20:53:06 -- scripts/common.sh@363 -- # (( v = 0 ))
00:06:45.379 20:53:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:45.379 20:53:06 -- scripts/common.sh@364 -- # decimal 1
00:06:45.379 20:53:06 -- scripts/common.sh@352 -- # local d=1
00:06:45.379 20:53:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:45.379 20:53:06 -- scripts/common.sh@354 -- # echo 1
00:06:45.379 20:53:06 -- scripts/common.sh@364 -- # ver1[v]=1
00:06:45.379 20:53:06 -- scripts/common.sh@365 -- # decimal 2
00:06:45.379 20:53:06 -- scripts/common.sh@352 -- # local d=2
00:06:45.379 20:53:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:45.379 20:53:06 -- scripts/common.sh@354 -- # echo 2
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.379 20:53:06 -- scripts/common.sh@365 -- # ver2[v]=2
00:06:45.379 20:53:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:45.379 20:53:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:45.379 20:53:06 -- scripts/common.sh@367 -- # return 0
00:06:45.379 20:53:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:45.379 20:53:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:45.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.379 --rc genhtml_branch_coverage=1
00:06:45.379 --rc genhtml_function_coverage=1
00:06:45.379 --rc genhtml_legend=1
00:06:45.379 --rc geninfo_all_blocks=1
00:06:45.379 --rc geninfo_unexecuted_blocks=1
00:06:45.379
00:06:45.379 '
00:06:45.379 20:53:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:45.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.379 --rc genhtml_branch_coverage=1
00:06:45.379 --rc genhtml_function_coverage=1
00:06:45.379 --rc genhtml_legend=1
00:06:45.379 --rc geninfo_all_blocks=1
00:06:45.379 --rc geninfo_unexecuted_blocks=1
00:06:45.379
00:06:45.379 '
00:06:45.379 20:53:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:06:45.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.379 --rc genhtml_branch_coverage=1
00:06:45.379 --rc genhtml_function_coverage=1
00:06:45.379 --rc genhtml_legend=1
00:06:45.379 --rc geninfo_all_blocks=1
00:06:45.379 --rc geninfo_unexecuted_blocks=1
00:06:45.379
00:06:45.379 '
00:06:45.379 20:53:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:06:45.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.379 --rc genhtml_branch_coverage=1
00:06:45.379 --rc genhtml_function_coverage=1
00:06:45.379 --rc genhtml_legend=1
00:06:45.379 --rc geninfo_all_blocks=1
00:06:45.379 --rc geninfo_unexecuted_blocks=1
00:06:45.379
00:06:45.379 '
00:06:45.379 20:53:06 -- accel/accel.sh@73 -- # declare -A expected_opcs
00:06:45.379 20:53:06 -- accel/accel.sh@74 -- # get_expected_opcs
00:06:45.379 20:53:06 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:45.379 20:53:06 -- accel/accel.sh@59 -- # spdk_tgt_pid=58831
00:06:45.379 20:53:06 -- accel/accel.sh@60 -- # waitforlisten 58831
00:06:45.379 20:53:06 -- common/autotest_common.sh@829 -- # '[' -z 58831 ']'
00:06:45.379 20:53:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.379 20:53:06 -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:45.379 20:53:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:45.379 20:53:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.379 20:53:06 -- common/autotest_common.sh@10 -- # set +x 00:06:45.379 20:53:06 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:45.379 20:53:06 -- accel/accel.sh@58 -- # build_accel_config 00:06:45.379 20:53:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.380 20:53:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.380 20:53:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.380 20:53:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.380 20:53:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.380 20:53:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.380 20:53:06 -- accel/accel.sh@42 -- # jq -r . 00:06:45.639 [2024-12-08 20:53:06.463042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.639 [2024-12-08 20:53:06.463467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58831 ] 00:06:45.639 [2024-12-08 20:53:06.628819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.898 [2024-12-08 20:53:06.771459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.898 [2024-12-08 20:53:06.771899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.468 20:53:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.468 20:53:07 -- common/autotest_common.sh@862 -- # return 0 00:06:46.468 20:53:07 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:46.468 20:53:07 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:46.468 20:53:07 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:46.468 20:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.468 20:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:46.468 20:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # 
IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # IFS== 00:06:46.468 20:53:07 -- accel/accel.sh@64 -- # read -r opc module 00:06:46.468 20:53:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:46.468 20:53:07 -- accel/accel.sh@67 -- # killprocess 58831 00:06:46.468 20:53:07 -- common/autotest_common.sh@936 -- # '[' -z 58831 ']' 00:06:46.468 20:53:07 -- common/autotest_common.sh@940 -- # kill -0 58831 00:06:46.468 20:53:07 -- common/autotest_common.sh@941 -- # uname 00:06:46.468 20:53:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.468 20:53:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58831 00:06:46.468 20:53:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.468 20:53:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.468 20:53:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58831' 00:06:46.468 killing process with pid 58831 00:06:46.469 20:53:07 -- common/autotest_common.sh@955 -- # kill 58831 00:06:46.469 20:53:07 -- common/autotest_common.sh@960 -- # wait 58831 00:06:48.376 20:53:09 -- accel/accel.sh@68 -- # trap - ERR 00:06:48.376 20:53:09 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:48.376 20:53:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:48.376 20:53:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.376 20:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.376 20:53:09 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:48.376 20:53:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:48.376 20:53:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.376 20:53:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.376 20:53:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.376 20:53:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.376 20:53:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.376 20:53:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.376 20:53:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.376 20:53:09 -- accel/accel.sh@42 -- # jq -r . 
00:06:48.376 20:53:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.376 20:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.376 20:53:09 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:48.376 20:53:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:48.376 20:53:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.376 20:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.376 ************************************ 00:06:48.376 START TEST accel_missing_filename 00:06:48.376 ************************************ 00:06:48.376 20:53:09 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:48.376 20:53:09 -- common/autotest_common.sh@650 -- # local es=0 00:06:48.376 20:53:09 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:48.376 20:53:09 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:48.376 20:53:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.376 20:53:09 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:48.376 20:53:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.376 20:53:09 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:48.376 20:53:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:48.376 20:53:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.376 20:53:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.376 20:53:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.376 20:53:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.376 20:53:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.376 20:53:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.376 20:53:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.376 20:53:09 -- accel/accel.sh@42 -- # jq -r . 00:06:48.376 [2024-12-08 20:53:09.272919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.376 [2024-12-08 20:53:09.273212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ] 00:06:48.636 [2024-12-08 20:53:09.428096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.636 [2024-12-08 20:53:09.570364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.896 [2024-12-08 20:53:09.712897] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.156 [2024-12-08 20:53:10.079350] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:49.417 A filename is required. 
00:06:49.417 20:53:10 -- common/autotest_common.sh@653 -- # es=234 00:06:49.417 20:53:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.417 ************************************ 00:06:49.417 END TEST accel_missing_filename 00:06:49.417 ************************************ 00:06:49.417 20:53:10 -- common/autotest_common.sh@662 -- # es=106 00:06:49.417 20:53:10 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:49.417 20:53:10 -- common/autotest_common.sh@670 -- # es=1 00:06:49.417 20:53:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.417 00:06:49.417 real 0m1.130s 00:06:49.417 user 0m0.946s 00:06:49.417 sys 0m0.127s 00:06:49.417 20:53:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.417 20:53:10 -- common/autotest_common.sh@10 -- # set +x 00:06:49.417 20:53:10 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.417 20:53:10 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:49.417 20:53:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.417 20:53:10 -- common/autotest_common.sh@10 -- # set +x 00:06:49.417 ************************************ 00:06:49.417 START TEST accel_compress_verify 00:06:49.417 ************************************ 00:06:49.417 20:53:10 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.417 20:53:10 -- common/autotest_common.sh@650 -- # local es=0 00:06:49.417 20:53:10 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.417 20:53:10 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:49.417 20:53:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.417 20:53:10 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:49.417 20:53:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.417 20:53:10 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.417 20:53:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.417 20:53:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.417 20:53:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.417 20:53:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.417 20:53:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.417 20:53:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.417 20:53:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.417 20:53:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.417 20:53:10 -- accel/accel.sh@42 -- # jq -r . 00:06:49.417 [2024-12-08 20:53:10.454098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:49.417 [2024-12-08 20:53:10.454248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58932 ] 00:06:49.678 [2024-12-08 20:53:10.607634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.938 [2024-12-08 20:53:10.750405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.938 [2024-12-08 20:53:10.890407] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.507 [2024-12-08 20:53:11.247351] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:50.507 00:06:50.507 Compression does not support the verify option, aborting. 00:06:50.507 20:53:11 -- common/autotest_common.sh@653 -- # es=161 00:06:50.507 20:53:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.507 20:53:11 -- common/autotest_common.sh@662 -- # es=33 00:06:50.507 20:53:11 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:50.507 20:53:11 -- common/autotest_common.sh@670 -- # es=1 00:06:50.507 20:53:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.507 00:06:50.507 real 0m1.119s 00:06:50.507 user 0m0.937s 00:06:50.507 sys 0m0.127s 00:06:50.507 ************************************ 00:06:50.507 END TEST accel_compress_verify 00:06:50.507 ************************************ 00:06:50.507 20:53:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.507 20:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.784 20:53:11 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:50.784 20:53:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:50.784 20:53:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.784 20:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.784 ************************************ 00:06:50.784 START TEST accel_wrong_workload 00:06:50.784 ************************************ 00:06:50.784 20:53:11 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:50.784 20:53:11 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.784 20:53:11 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:50.784 20:53:11 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.784 20:53:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.784 20:53:11 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.784 20:53:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.784 20:53:11 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:50.784 20:53:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:50.784 20:53:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.784 20:53:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.784 20:53:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.784 20:53:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.784 20:53:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.784 20:53:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.784 20:53:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.784 20:53:11 -- accel/accel.sh@42 -- # jq -r . 
00:06:50.784 Unsupported workload type: foobar 00:06:50.784 [2024-12-08 20:53:11.637817] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:50.784 accel_perf options: 00:06:50.784 [-h help message] 00:06:50.784 [-q queue depth per core] 00:06:50.784 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.784 [-T number of threads per core 00:06:50.784 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.784 [-t time in seconds] 00:06:50.784 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.784 [ dif_verify, , dif_generate, dif_generate_copy 00:06:50.784 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.784 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.784 [-S for crc32c workload, use this seed value (default 0) 00:06:50.784 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.784 [-f for fill workload, use this BYTE value (default 255) 00:06:50.784 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.784 [-y verify result if this switch is on] 00:06:50.784 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.784 Can be used to spread operations across a wider range of memory. 00:06:50.784 20:53:11 -- common/autotest_common.sh@653 -- # es=1 00:06:50.784 20:53:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.784 20:53:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.784 20:53:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.784 00:06:50.784 real 0m0.078s 00:06:50.784 user 0m0.085s 00:06:50.784 sys 0m0.045s 00:06:50.784 20:53:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.784 20:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.784 ************************************ 00:06:50.784 END TEST accel_wrong_workload 00:06:50.784 ************************************ 00:06:50.784 20:53:11 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.784 20:53:11 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:50.784 20:53:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.784 20:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.784 ************************************ 00:06:50.784 START TEST accel_negative_buffers 00:06:50.784 ************************************ 00:06:50.784 20:53:11 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.784 20:53:11 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.784 20:53:11 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:50.784 20:53:11 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.784 20:53:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.784 20:53:11 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.784 20:53:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.784 20:53:11 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:50.784 20:53:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:50.784 20:53:11 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:50.784 20:53:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.784 20:53:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.784 20:53:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.784 20:53:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.784 20:53:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.784 20:53:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.784 20:53:11 -- accel/accel.sh@42 -- # jq -r . 00:06:50.784 -x option must be non-negative. 00:06:50.784 [2024-12-08 20:53:11.767418] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:50.784 accel_perf options: 00:06:50.784 [-h help message] 00:06:50.784 [-q queue depth per core] 00:06:50.784 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.784 [-T number of threads per core 00:06:50.784 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.784 [-t time in seconds] 00:06:50.784 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.784 [ dif_verify, , dif_generate, dif_generate_copy 00:06:50.784 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.784 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.784 [-S for crc32c workload, use this seed value (default 0) 00:06:50.784 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.784 [-f for fill workload, use this BYTE value (default 255) 00:06:50.784 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.784 [-y verify result if this switch is on] 00:06:50.784 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.784 Can be used to spread operations across a wider range of memory. 
00:06:50.784 20:53:11 -- common/autotest_common.sh@653 -- # es=1 00:06:50.784 20:53:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.784 20:53:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.784 20:53:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.784 00:06:50.784 real 0m0.080s 00:06:50.784 user 0m0.096s 00:06:50.784 sys 0m0.040s 00:06:50.784 20:53:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.784 20:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.784 ************************************ 00:06:50.784 END TEST accel_negative_buffers 00:06:50.784 ************************************ 00:06:51.043 20:53:11 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:51.043 20:53:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:51.043 20:53:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.043 20:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:51.043 ************************************ 00:06:51.043 START TEST accel_crc32c 00:06:51.043 ************************************ 00:06:51.043 20:53:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:51.043 20:53:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.043 20:53:11 -- accel/accel.sh@17 -- # local accel_module 00:06:51.043 20:53:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:51.043 20:53:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.043 20:53:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:51.043 20:53:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.043 20:53:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.043 20:53:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.043 20:53:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.043 20:53:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.043 20:53:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.043 20:53:11 -- accel/accel.sh@42 -- # jq -r . 00:06:51.043 [2024-12-08 20:53:11.898249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.043 [2024-12-08 20:53:11.898399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58999 ] 00:06:51.043 [2024-12-08 20:53:12.067459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.302 [2024-12-08 20:53:12.218594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.203 20:53:13 -- accel/accel.sh@18 -- # out=' 00:06:53.203 SPDK Configuration: 00:06:53.203 Core mask: 0x1 00:06:53.203 00:06:53.203 Accel Perf Configuration: 00:06:53.203 Workload Type: crc32c 00:06:53.203 CRC-32C seed: 32 00:06:53.203 Transfer size: 4096 bytes 00:06:53.203 Vector count 1 00:06:53.203 Module: software 00:06:53.203 Queue depth: 32 00:06:53.203 Allocate depth: 32 00:06:53.203 # threads/core: 1 00:06:53.203 Run time: 1 seconds 00:06:53.203 Verify: Yes 00:06:53.203 00:06:53.203 Running for 1 seconds... 
00:06:53.203 00:06:53.203 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.203 ------------------------------------------------------------------------------------ 00:06:53.203 0,0 499072/s 1949 MiB/s 0 0 00:06:53.203 ==================================================================================== 00:06:53.203 Total 499072/s 1949 MiB/s 0 0' 00:06:53.203 20:53:13 -- accel/accel.sh@20 -- # IFS=: 00:06:53.203 20:53:13 -- accel/accel.sh@20 -- # read -r var val 00:06:53.203 20:53:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:53.203 20:53:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:53.203 20:53:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.203 20:53:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.203 20:53:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.203 20:53:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.203 20:53:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.203 20:53:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.203 20:53:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.203 20:53:13 -- accel/accel.sh@42 -- # jq -r . 00:06:53.203 [2024-12-08 20:53:14.048303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.203 [2024-12-08 20:53:14.048453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59025 ] 00:06:53.203 [2024-12-08 20:53:14.218886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.462 [2024-12-08 20:53:14.362943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val= 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val= 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val=0x1 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val= 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val= 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val=crc32c 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val=32 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val= 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val=software 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val=32 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val=32 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val=1 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val=Yes 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val= 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:53.721 20:53:14 -- accel/accel.sh@21 -- # val= 00:06:53.721 20:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:53.721 20:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:55.095 20:53:16 -- accel/accel.sh@21 -- # val= 00:06:55.095 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.095 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:55.095 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:55.095 20:53:16 -- accel/accel.sh@21 -- # val= 00:06:55.095 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.095 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:55.095 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:55.095 20:53:16 -- accel/accel.sh@21 -- # val= 00:06:55.095 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.095 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:55.096 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:55.096 20:53:16 -- accel/accel.sh@21 -- # val= 00:06:55.096 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.096 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:55.096 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:55.096 20:53:16 -- accel/accel.sh@21 -- # val= 00:06:55.096 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.096 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:55.096 20:53:16 -- 
accel/accel.sh@20 -- # read -r var val 00:06:55.096 20:53:16 -- accel/accel.sh@21 -- # val= 00:06:55.096 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.096 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:55.096 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:55.355 20:53:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.355 20:53:16 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:55.355 20:53:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.355 00:06:55.355 real 0m4.296s 00:06:55.355 user 0m3.795s 00:06:55.355 sys 0m0.294s 00:06:55.355 20:53:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.355 20:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 ************************************ 00:06:55.355 END TEST accel_crc32c 00:06:55.355 ************************************ 00:06:55.355 20:53:16 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:55.355 20:53:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:55.355 20:53:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.355 20:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 ************************************ 00:06:55.355 START TEST accel_crc32c_C2 00:06:55.355 ************************************ 00:06:55.355 20:53:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:55.355 20:53:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.355 20:53:16 -- accel/accel.sh@17 -- # local accel_module 00:06:55.355 20:53:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:55.355 20:53:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:55.355 20:53:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.355 20:53:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.355 20:53:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.355 20:53:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.355 20:53:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.355 20:53:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.355 20:53:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.355 20:53:16 -- accel/accel.sh@42 -- # jq -r . 00:06:55.355 [2024-12-08 20:53:16.247105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.355 [2024-12-08 20:53:16.247260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ] 00:06:55.615 [2024-12-08 20:53:16.415276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.615 [2024-12-08 20:53:16.563519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.521 20:53:18 -- accel/accel.sh@18 -- # out=' 00:06:57.521 SPDK Configuration: 00:06:57.521 Core mask: 0x1 00:06:57.521 00:06:57.521 Accel Perf Configuration: 00:06:57.521 Workload Type: crc32c 00:06:57.521 CRC-32C seed: 0 00:06:57.521 Transfer size: 4096 bytes 00:06:57.521 Vector count 2 00:06:57.521 Module: software 00:06:57.521 Queue depth: 32 00:06:57.521 Allocate depth: 32 00:06:57.521 # threads/core: 1 00:06:57.521 Run time: 1 seconds 00:06:57.521 Verify: Yes 00:06:57.521 00:06:57.521 Running for 1 seconds... 
00:06:57.521 00:06:57.521 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.521 ------------------------------------------------------------------------------------ 00:06:57.521 0,0 391328/s 3057 MiB/s 0 0 00:06:57.521 ==================================================================================== 00:06:57.521 Total 391328/s 3057 MiB/s 0 0' 00:06:57.521 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:57.521 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:57.521 20:53:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:57.521 20:53:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:57.521 20:53:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.521 20:53:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.521 20:53:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.521 20:53:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.521 20:53:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.521 20:53:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.521 20:53:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.521 20:53:18 -- accel/accel.sh@42 -- # jq -r . 00:06:57.521 [2024-12-08 20:53:18.389701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.521 [2024-12-08 20:53:18.389853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:06:57.521 [2024-12-08 20:53:18.557484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.780 [2024-12-08 20:53:18.700455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.039 20:53:18 -- accel/accel.sh@21 -- # val= 00:06:58.039 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.039 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.039 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.039 20:53:18 -- accel/accel.sh@21 -- # val= 00:06:58.039 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.039 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.039 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.039 20:53:18 -- accel/accel.sh@21 -- # val=0x1 00:06:58.039 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.039 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val= 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val= 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val=crc32c 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val=0 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 --
accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val= 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val=software 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val=32 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val=32 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val=1 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val=Yes 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val= 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.040 20:53:18 -- accel/accel.sh@21 -- # val= 00:06:58.040 20:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.040 20:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:59.419 20:53:20 -- accel/accel.sh@21 -- # val= 00:06:59.419 20:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.419 20:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.419 20:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.678 20:53:20 -- accel/accel.sh@21 -- # val= 00:06:59.679 20:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.679 20:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.679 20:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.679 20:53:20 -- accel/accel.sh@21 -- # val= 00:06:59.679 20:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.679 20:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.679 20:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.679 20:53:20 -- accel/accel.sh@21 -- # val= 00:06:59.679 20:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.679 20:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.679 20:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.679 20:53:20 -- accel/accel.sh@21 -- # val= 00:06:59.679 20:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.679 20:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.679 20:53:20 -- 
accel/accel.sh@20 -- # read -r var val 00:06:59.679 20:53:20 -- accel/accel.sh@21 -- # val= 00:06:59.679 20:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.679 20:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.679 20:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.679 20:53:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.679 20:53:20 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:59.679 20:53:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.679 ************************************ 00:06:59.679 END TEST accel_crc32c_C2 00:06:59.679 ************************************ 00:06:59.679 00:06:59.679 real 0m4.281s 00:06:59.679 user 0m3.804s 00:06:59.679 sys 0m0.273s 00:06:59.679 20:53:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.679 20:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:59.679 20:53:20 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:59.679 20:53:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:59.679 20:53:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.679 20:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:59.679 ************************************ 00:06:59.679 START TEST accel_copy 00:06:59.679 ************************************ 00:06:59.679 20:53:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:59.679 20:53:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.679 20:53:20 -- accel/accel.sh@17 -- # local accel_module 00:06:59.679 20:53:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:59.679 20:53:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:59.679 20:53:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.679 20:53:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.679 20:53:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.679 20:53:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.679 20:53:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.679 20:53:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.679 20:53:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.679 20:53:20 -- accel/accel.sh@42 -- # jq -r . 00:06:59.679 [2024-12-08 20:53:20.562130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.679 [2024-12-08 20:53:20.562238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ] 00:06:59.679 [2024-12-08 20:53:20.711054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.939 [2024-12-08 20:53:20.855232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.847 20:53:22 -- accel/accel.sh@18 -- # out=' 00:07:01.847 SPDK Configuration: 00:07:01.847 Core mask: 0x1 00:07:01.847 00:07:01.847 Accel Perf Configuration: 00:07:01.847 Workload Type: copy 00:07:01.847 Transfer size: 4096 bytes 00:07:01.847 Vector count 1 00:07:01.847 Module: software 00:07:01.847 Queue depth: 32 00:07:01.847 Allocate depth: 32 00:07:01.847 # threads/core: 1 00:07:01.847 Run time: 1 seconds 00:07:01.847 Verify: Yes 00:07:01.847 00:07:01.847 Running for 1 seconds... 
00:07:01.847 00:07:01.847 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.847 ------------------------------------------------------------------------------------ 00:07:01.847 0,0 308416/s 1204 MiB/s 0 0 00:07:01.847 ==================================================================================== 00:07:01.847 Total 308416/s 1204 MiB/s 0 0' 00:07:01.847 20:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:01.847 20:53:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:01.847 20:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:01.847 20:53:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:01.847 20:53:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.847 20:53:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.847 20:53:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.847 20:53:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.847 20:53:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.847 20:53:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.847 20:53:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.847 20:53:22 -- accel/accel.sh@42 -- # jq -r . 00:07:01.847 [2024-12-08 20:53:22.680619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.847 [2024-12-08 20:53:22.680932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59169 ] 00:07:01.847 [2024-12-08 20:53:22.846837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.106 [2024-12-08 20:53:22.991531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.106 20:53:23 -- accel/accel.sh@21 -- # val= 00:07:02.106 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.106 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.106 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val= 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val=0x1 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val= 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val= 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val=copy 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- 
accel/accel.sh@21 -- # val= 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val=software 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val=32 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val=32 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val=1 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val=Yes 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val= 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.107 20:53:23 -- accel/accel.sh@21 -- # val= 00:07:02.107 20:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:02.107 20:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.023 20:53:24 -- accel/accel.sh@21 -- # val= 00:07:04.023 20:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.023 20:53:24 -- accel/accel.sh@21 -- # val= 00:07:04.023 20:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.023 20:53:24 -- accel/accel.sh@21 -- # val= 00:07:04.023 20:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.023 20:53:24 -- accel/accel.sh@21 -- # val= 00:07:04.023 20:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.023 20:53:24 -- accel/accel.sh@21 -- # val= 00:07:04.023 20:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:04.023 20:53:24 -- accel/accel.sh@21 -- # val= 00:07:04.023 20:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.023 20:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:04.023 20:53:24 -- 
accel/accel.sh@20 -- # read -r var val 00:07:04.023 20:53:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.023 ************************************ 00:07:04.023 END TEST accel_copy 00:07:04.023 ************************************ 00:07:04.023 20:53:24 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:04.023 20:53:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.023 00:07:04.023 real 0m4.245s 00:07:04.023 user 0m3.779s 00:07:04.023 sys 0m0.260s 00:07:04.023 20:53:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.023 20:53:24 -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 20:53:24 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.023 20:53:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:04.023 20:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.023 20:53:24 -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 ************************************ 00:07:04.023 START TEST accel_fill 00:07:04.023 ************************************ 00:07:04.023 20:53:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.023 20:53:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.023 20:53:24 -- accel/accel.sh@17 -- # local accel_module 00:07:04.023 20:53:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.023 20:53:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.023 20:53:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.023 20:53:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.023 20:53:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.023 20:53:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.023 20:53:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.023 20:53:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.023 20:53:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.023 20:53:24 -- accel/accel.sh@42 -- # jq -r . 00:07:04.023 [2024-12-08 20:53:24.889827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.023 [2024-12-08 20:53:24.890043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59211 ] 00:07:04.023 [2024-12-08 20:53:25.059009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.293 [2024-12-08 20:53:25.202938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.250 20:53:26 -- accel/accel.sh@18 -- # out=' 00:07:06.250 SPDK Configuration: 00:07:06.250 Core mask: 0x1 00:07:06.250 00:07:06.250 Accel Perf Configuration: 00:07:06.250 Workload Type: fill 00:07:06.250 Fill pattern: 0x80 00:07:06.250 Transfer size: 4096 bytes 00:07:06.250 Vector count 1 00:07:06.250 Module: software 00:07:06.250 Queue depth: 64 00:07:06.250 Allocate depth: 64 00:07:06.250 # threads/core: 1 00:07:06.250 Run time: 1 seconds 00:07:06.250 Verify: Yes 00:07:06.250 00:07:06.250 Running for 1 seconds... 
00:07:06.250 00:07:06.250 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.250 ------------------------------------------------------------------------------------ 00:07:06.250 0,0 475008/s 1855 MiB/s 0 0 00:07:06.250 ==================================================================================== 00:07:06.250 Total 475008/s 1855 MiB/s 0 0' 00:07:06.250 20:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.250 20:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.250 20:53:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.250 20:53:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.250 20:53:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.250 20:53:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.250 20:53:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.250 20:53:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.250 20:53:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.250 20:53:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.250 20:53:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.250 20:53:26 -- accel/accel.sh@42 -- # jq -r . 00:07:06.250 [2024-12-08 20:53:27.043978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.250 [2024-12-08 20:53:27.044147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59237 ] 00:07:06.250 [2024-12-08 20:53:27.209094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.508 [2024-12-08 20:53:27.352307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val= 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val= 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val=0x1 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val= 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val= 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val=fill 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val=0x80 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 
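The fill throughput table above is internally consistent: bandwidth is transfers per second times the 4096-byte transfer size, truncated to whole MiB/s. A one-line shell check of the 475008/s row:

    # 475008 transfers/s * 4096 B per transfer, in MiB/s (integer division truncates)
    echo $(( 475008 * 4096 / 1048576 ))   # prints 1855, matching the table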
00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val= 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val=software 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val=64 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val=64 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val=1 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val=Yes 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val= 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:06.508 20:53:27 -- accel/accel.sh@21 -- # val= 00:07:06.508 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:06.508 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:08.406 20:53:29 -- accel/accel.sh@21 -- # val= 00:07:08.406 20:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.406 20:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.406 20:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.406 20:53:29 -- accel/accel.sh@21 -- # val= 00:07:08.406 20:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.406 20:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.406 20:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.407 20:53:29 -- accel/accel.sh@21 -- # val= 00:07:08.407 20:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.407 20:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.407 20:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.407 20:53:29 -- accel/accel.sh@21 -- # val= 00:07:08.407 20:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.407 20:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.407 20:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.407 20:53:29 -- accel/accel.sh@21 -- # val= 00:07:08.407 20:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.407 20:53:29 -- accel/accel.sh@20 -- # IFS=: 
00:07:08.407 20:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.407 20:53:29 -- accel/accel.sh@21 -- # val= 00:07:08.407 20:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.407 20:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.407 20:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.407 20:53:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.407 20:53:29 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:08.407 20:53:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.407 00:07:08.407 real 0m4.315s 00:07:08.407 user 0m3.822s 00:07:08.407 sys 0m0.289s 00:07:08.407 20:53:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.407 ************************************ 00:07:08.407 20:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:08.407 END TEST accel_fill 00:07:08.407 ************************************ 00:07:08.407 20:53:29 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:08.407 20:53:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:08.407 20:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.407 20:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:08.407 ************************************ 00:07:08.407 START TEST accel_copy_crc32c 00:07:08.407 ************************************ 00:07:08.407 20:53:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:08.407 20:53:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.407 20:53:29 -- accel/accel.sh@17 -- # local accel_module 00:07:08.407 20:53:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:08.407 20:53:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:08.407 20:53:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.407 20:53:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.407 20:53:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.407 20:53:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.407 20:53:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.407 20:53:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.407 20:53:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.407 20:53:29 -- accel/accel.sh@42 -- # jq -r . 00:07:08.407 [2024-12-08 20:53:29.238617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.407 [2024-12-08 20:53:29.238764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59279 ] 00:07:08.407 [2024-12-08 20:53:29.397008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.667 [2024-12-08 20:53:29.538646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.590 20:53:31 -- accel/accel.sh@18 -- # out=' 00:07:10.590 SPDK Configuration: 00:07:10.590 Core mask: 0x1 00:07:10.590 00:07:10.590 Accel Perf Configuration: 00:07:10.590 Workload Type: copy_crc32c 00:07:10.590 CRC-32C seed: 0 00:07:10.590 Vector size: 4096 bytes 00:07:10.590 Transfer size: 4096 bytes 00:07:10.590 Vector count 1 00:07:10.590 Module: software 00:07:10.590 Queue depth: 32 00:07:10.590 Allocate depth: 32 00:07:10.590 # threads/core: 1 00:07:10.590 Run time: 1 seconds 00:07:10.590 Verify: Yes 00:07:10.590 00:07:10.590 Running for 1 seconds... 
00:07:10.590 00:07:10.590 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.590 ------------------------------------------------------------------------------------ 00:07:10.590 0,0 252928/s 988 MiB/s 0 0 00:07:10.590 ==================================================================================== 00:07:10.590 Total 252928/s 988 MiB/s 0 0' 00:07:10.590 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.590 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.590 20:53:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:10.590 20:53:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:10.590 20:53:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.590 20:53:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.590 20:53:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.590 20:53:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.590 20:53:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.590 20:53:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.590 20:53:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.590 20:53:31 -- accel/accel.sh@42 -- # jq -r . 00:07:10.590 [2024-12-08 20:53:31.371589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.590 [2024-12-08 20:53:31.371759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59305 ] 00:07:10.590 [2024-12-08 20:53:31.538043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.850 [2024-12-08 20:53:31.683238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val= 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val= 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val=0x1 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val= 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val= 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val=0 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 
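copy_crc32c copies each 4096-byte source buffer and computes a CRC-32C over it (seed 0 here) in a single operation, so the software module does strictly more per-byte work than plain fill and throughput drops accordingly, from 1855 MiB/s to 988 MiB/s on this host. The same arithmetic check holds for its table above:

    echo $(( 252928 * 4096 / 1048576 ))   # prints 988, matching the copy_crc32c row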
20:53:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val= 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val=software 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val=32 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val=32 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val=1 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val=Yes 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val= 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.850 20:53:31 -- accel/accel.sh@21 -- # val= 00:07:10.850 20:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.850 20:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:12.756 20:53:33 -- accel/accel.sh@21 -- # val= 00:07:12.756 20:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:12.756 20:53:33 -- accel/accel.sh@21 -- # val= 00:07:12.756 20:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:12.756 20:53:33 -- accel/accel.sh@21 -- # val= 00:07:12.756 20:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:12.756 20:53:33 -- accel/accel.sh@21 -- # val= 00:07:12.756 20:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # IFS=: 
00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:12.756 20:53:33 -- accel/accel.sh@21 -- # val= 00:07:12.756 20:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:12.756 20:53:33 -- accel/accel.sh@21 -- # val= 00:07:12.756 20:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:12.756 20:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:12.756 20:53:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.756 20:53:33 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:12.756 20:53:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.756 00:07:12.756 real 0m4.285s 00:07:12.756 user 0m3.802s 00:07:12.756 sys 0m0.279s 00:07:12.756 20:53:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.756 ************************************ 00:07:12.756 END TEST accel_copy_crc32c 00:07:12.756 ************************************ 00:07:12.756 20:53:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.756 20:53:33 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:12.756 20:53:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:12.756 20:53:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.756 20:53:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.756 ************************************ 00:07:12.756 START TEST accel_copy_crc32c_C2 00:07:12.756 ************************************ 00:07:12.756 20:53:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:12.756 20:53:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.756 20:53:33 -- accel/accel.sh@17 -- # local accel_module 00:07:12.756 20:53:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:12.756 20:53:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:12.756 20:53:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.756 20:53:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.756 20:53:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.756 20:53:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.756 20:53:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.756 20:53:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.756 20:53:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.756 20:53:33 -- accel/accel.sh@42 -- # jq -r . 00:07:12.756 [2024-12-08 20:53:33.573412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
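The _C2 variant launched above reuses the copy_crc32c workload and adds -C 2; judging from the configuration block printed below (Vector count 2, Transfer size 8192 bytes), -C sets the source vector count, so each operation consumes two 4096-byte vectors and produces a single CRC, exercising the vectored form of the operation rather than the single-buffer form:

    # Same workload, vectored: -C <n> appears to set "Vector count" in the config dump.
    accel_perf -t 1 -w copy_crc32c -y -C 2   # two 4096 B vectors -> 8192 B per transfer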
00:07:12.756 [2024-12-08 20:53:33.573567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59346 ] 00:07:12.756 [2024-12-08 20:53:33.742208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.015 [2024-12-08 20:53:33.892008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.919 20:53:35 -- accel/accel.sh@18 -- # out=' 00:07:14.920 SPDK Configuration: 00:07:14.920 Core mask: 0x1 00:07:14.920 00:07:14.920 Accel Perf Configuration: 00:07:14.920 Workload Type: copy_crc32c 00:07:14.920 CRC-32C seed: 0 00:07:14.920 Vector size: 4096 bytes 00:07:14.920 Transfer size: 8192 bytes 00:07:14.920 Vector count 2 00:07:14.920 Module: software 00:07:14.920 Queue depth: 32 00:07:14.920 Allocate depth: 32 00:07:14.920 # threads/core: 1 00:07:14.920 Run time: 1 seconds 00:07:14.920 Verify: Yes 00:07:14.920 00:07:14.920 Running for 1 seconds... 00:07:14.920 00:07:14.920 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.920 ------------------------------------------------------------------------------------ 00:07:14.920 0,0 180896/s 1413 MiB/s 0 0 00:07:14.920 ==================================================================================== 00:07:14.920 Total 180896/s 706 MiB/s 0 0' 00:07:14.920 20:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:14.920 20:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:14.920 20:53:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:14.920 20:53:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:14.920 20:53:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.920 20:53:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.920 20:53:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.920 20:53:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.920 20:53:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.920 20:53:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.920 20:53:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.920 20:53:35 -- accel/accel.sh@42 -- # jq -r . 00:07:14.920 [2024-12-08 20:53:35.721474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:14.920 [2024-12-08 20:53:35.721793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59372 ] 00:07:14.920 [2024-12-08 20:53:35.890820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.179 [2024-12-08 20:53:36.038659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val= 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val= 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val=0x1 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val= 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val= 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val=0 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val= 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val=software 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val=32 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val=32 
00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val=1 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val=Yes 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val= 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.179 20:53:36 -- accel/accel.sh@21 -- # val= 00:07:15.179 20:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.179 20:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.099 20:53:37 -- accel/accel.sh@21 -- # val= 00:07:17.099 20:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.099 20:53:37 -- accel/accel.sh@21 -- # val= 00:07:17.099 20:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.099 20:53:37 -- accel/accel.sh@21 -- # val= 00:07:17.099 20:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.099 20:53:37 -- accel/accel.sh@21 -- # val= 00:07:17.099 20:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.099 20:53:37 -- accel/accel.sh@21 -- # val= 00:07:17.099 20:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.099 20:53:37 -- accel/accel.sh@21 -- # val= 00:07:17.099 20:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.099 20:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.099 20:53:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.099 ************************************ 00:07:17.099 END TEST accel_copy_crc32c_C2 00:07:17.099 ************************************ 00:07:17.099 20:53:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:17.099 20:53:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.099 00:07:17.099 real 0m4.294s 00:07:17.099 user 0m3.816s 00:07:17.099 sys 0m0.273s 00:07:17.099 20:53:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.099 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:07:17.099 20:53:37 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:17.099 20:53:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
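One oddity worth flagging in the _C2 results above: the per-core row and the Total row report the same 180896 transfers/s but different bandwidths (1413 vs 706 MiB/s). The per-core figure is consistent with the full 8192-byte transfer size, while the Total line matches 4096-byte math, which suggests the totals line is computed from the vector size rather than the whole transfer:

    echo $(( 180896 * 8192 / 1048576 ))   # 1413 MiB/s, the per-core figure
    echo $(( 180896 * 4096 / 1048576 ))   # 706 MiB/s, the figure on the Total line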
00:07:17.099 20:53:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.099 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:07:17.099 ************************************ 00:07:17.099 START TEST accel_dualcast 00:07:17.099 ************************************ 00:07:17.099 20:53:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:17.099 20:53:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.099 20:53:37 -- accel/accel.sh@17 -- # local accel_module 00:07:17.099 20:53:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:17.099 20:53:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:17.099 20:53:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.099 20:53:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.099 20:53:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.099 20:53:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.099 20:53:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.099 20:53:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.099 20:53:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.099 20:53:37 -- accel/accel.sh@42 -- # jq -r . 00:07:17.099 [2024-12-08 20:53:37.900008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.099 [2024-12-08 20:53:37.900160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59414 ] 00:07:17.099 [2024-12-08 20:53:38.050395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.358 [2024-12-08 20:53:38.193643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.260 20:53:39 -- accel/accel.sh@18 -- # out=' 00:07:19.260 SPDK Configuration: 00:07:19.260 Core mask: 0x1 00:07:19.260 00:07:19.260 Accel Perf Configuration: 00:07:19.260 Workload Type: dualcast 00:07:19.260 Transfer size: 4096 bytes 00:07:19.260 Vector count 1 00:07:19.260 Module: software 00:07:19.260 Queue depth: 32 00:07:19.260 Allocate depth: 32 00:07:19.260 # threads/core: 1 00:07:19.260 Run time: 1 seconds 00:07:19.260 Verify: Yes 00:07:19.260 00:07:19.260 Running for 1 seconds... 00:07:19.260 00:07:19.260 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.260 ------------------------------------------------------------------------------------ 00:07:19.260 0,0 343072/s 1340 MiB/s 0 0 00:07:19.260 ==================================================================================== 00:07:19.260 Total 343072/s 1340 MiB/s 0 0' 00:07:19.260 20:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:19.260 20:53:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:19.260 20:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:19.260 20:53:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:19.260 20:53:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.260 20:53:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.260 20:53:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.260 20:53:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.260 20:53:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.260 20:53:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.260 20:53:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.260 20:53:39 -- accel/accel.sh@42 -- # jq -r . 
00:07:19.260 [2024-12-08 20:53:40.030230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.260 [2024-12-08 20:53:40.030401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59445 ] 00:07:19.260 [2024-12-08 20:53:40.194952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.519 [2024-12-08 20:53:40.337551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val= 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val= 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val=0x1 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val= 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val= 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val=dualcast 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val= 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val=software 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val=32 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val=32 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val=1 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 
20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val=Yes 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val= 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.519 20:53:40 -- accel/accel.sh@21 -- # val= 00:07:19.519 20:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.519 20:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 20:53:42 -- accel/accel.sh@21 -- # val= 00:07:21.422 20:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 20:53:42 -- accel/accel.sh@21 -- # val= 00:07:21.422 20:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 20:53:42 -- accel/accel.sh@21 -- # val= 00:07:21.422 20:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 20:53:42 -- accel/accel.sh@21 -- # val= 00:07:21.422 20:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 20:53:42 -- accel/accel.sh@21 -- # val= 00:07:21.422 20:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 20:53:42 -- accel/accel.sh@21 -- # val= 00:07:21.422 20:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:21.422 20:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:21.422 20:53:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.422 20:53:42 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:21.422 20:53:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.422 00:07:21.422 real 0m4.257s 00:07:21.422 user 0m3.804s 00:07:21.422 sys 0m0.252s 00:07:21.422 20:53:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.422 ************************************ 00:07:21.422 END TEST accel_dualcast 00:07:21.422 ************************************ 00:07:21.422 20:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:21.422 20:53:42 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:21.422 20:53:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:21.422 20:53:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.422 20:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:21.422 ************************************ 00:07:21.422 START TEST accel_compare 00:07:21.422 ************************************ 00:07:21.422 20:53:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:21.422 
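dualcast, just completed above, writes one source buffer to two destinations per operation. Its table counts bandwidth from the single 4096-byte source, so 343072/s comes out as 1340 MiB/s; counting both output copies would double that figure, and either accounting is defensible:

    echo $(( 343072 * 4096 / 1048576 ))   # prints 1340, matching the dualcast row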
20:53:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.422 20:53:42 -- accel/accel.sh@17 -- # local accel_module 00:07:21.422 20:53:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:21.422 20:53:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:21.422 20:53:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.422 20:53:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.422 20:53:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.422 20:53:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.422 20:53:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.422 20:53:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.422 20:53:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.422 20:53:42 -- accel/accel.sh@42 -- # jq -r . 00:07:21.422 [2024-12-08 20:53:42.218988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.422 [2024-12-08 20:53:42.219170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59486 ] 00:07:21.422 [2024-12-08 20:53:42.385077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.681 [2024-12-08 20:53:42.527511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.586 20:53:44 -- accel/accel.sh@18 -- # out=' 00:07:23.586 SPDK Configuration: 00:07:23.586 Core mask: 0x1 00:07:23.586 00:07:23.586 Accel Perf Configuration: 00:07:23.586 Workload Type: compare 00:07:23.586 Transfer size: 4096 bytes 00:07:23.586 Vector count 1 00:07:23.586 Module: software 00:07:23.586 Queue depth: 32 00:07:23.586 Allocate depth: 32 00:07:23.586 # threads/core: 1 00:07:23.586 Run time: 1 seconds 00:07:23.586 Verify: Yes 00:07:23.586 00:07:23.586 Running for 1 seconds... 00:07:23.586 00:07:23.586 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.586 ------------------------------------------------------------------------------------ 00:07:23.586 0,0 462816/s 1807 MiB/s 0 0 00:07:23.586 ==================================================================================== 00:07:23.586 Total 462816/s 1807 MiB/s 0 0' 00:07:23.586 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.586 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.586 20:53:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:23.586 20:53:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:23.586 20:53:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.586 20:53:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.586 20:53:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.586 20:53:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.586 20:53:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.586 20:53:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.586 20:53:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.586 20:53:44 -- accel/accel.sh@42 -- # jq -r . 00:07:23.586 [2024-12-08 20:53:44.362383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:23.586 [2024-12-08 20:53:44.362539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59512 ] 00:07:23.586 [2024-12-08 20:53:44.530109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.846 [2024-12-08 20:53:44.674043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val= 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val= 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val=0x1 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val= 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val= 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val=compare 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val= 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val=software 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val=32 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val=32 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val=1 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val=Yes 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val= 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.846 20:53:44 -- accel/accel.sh@21 -- # val= 00:07:23.846 20:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.846 20:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:25.750 20:53:46 -- accel/accel.sh@21 -- # val= 00:07:25.750 20:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:25.750 20:53:46 -- accel/accel.sh@21 -- # val= 00:07:25.750 20:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:25.750 20:53:46 -- accel/accel.sh@21 -- # val= 00:07:25.750 20:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:25.750 20:53:46 -- accel/accel.sh@21 -- # val= 00:07:25.750 20:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:25.750 20:53:46 -- accel/accel.sh@21 -- # val= 00:07:25.750 20:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:25.750 20:53:46 -- accel/accel.sh@21 -- # val= 00:07:25.750 20:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:25.750 20:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:25.750 20:53:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.750 20:53:46 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:25.750 20:53:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.750 00:07:25.750 real 0m4.295s 00:07:25.750 user 0m3.806s 00:07:25.750 sys 0m0.279s 00:07:25.750 20:53:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.750 20:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.750 ************************************ 00:07:25.750 END TEST accel_compare 00:07:25.750 ************************************ 00:07:25.750 20:53:46 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:25.750 20:53:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:25.750 20:53:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.750 20:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.751 ************************************ 00:07:25.751 START TEST accel_xor 00:07:25.751 ************************************ 00:07:25.751 20:53:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:25.751 20:53:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.751 20:53:46 -- accel/accel.sh@17 -- # local accel_module 00:07:25.751 
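compare reads two buffers and checks them for equality without producing output data, which is why its software throughput (462816/s, about 1807 MiB/s by the usual arithmetic) lands close to plain fill, and why the Miscompares column exists at all: a nonzero value there would mean the two buffers differed.

    echo $(( 462816 * 4096 / 1048576 ))   # prints 1807, matching the compare row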
20:53:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:25.751 20:53:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:25.751 20:53:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.751 20:53:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.751 20:53:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.751 20:53:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.751 20:53:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.751 20:53:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.751 20:53:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.751 20:53:46 -- accel/accel.sh@42 -- # jq -r . 00:07:25.751 [2024-12-08 20:53:46.565365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.751 [2024-12-08 20:53:46.565523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59553 ] 00:07:25.751 [2024-12-08 20:53:46.733622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.010 [2024-12-08 20:53:46.878757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.917 20:53:48 -- accel/accel.sh@18 -- # out=' 00:07:27.917 SPDK Configuration: 00:07:27.917 Core mask: 0x1 00:07:27.917 00:07:27.917 Accel Perf Configuration: 00:07:27.917 Workload Type: xor 00:07:27.917 Source buffers: 2 00:07:27.917 Transfer size: 4096 bytes 00:07:27.917 Vector count 1 00:07:27.917 Module: software 00:07:27.917 Queue depth: 32 00:07:27.917 Allocate depth: 32 00:07:27.917 # threads/core: 1 00:07:27.917 Run time: 1 seconds 00:07:27.917 Verify: Yes 00:07:27.917 00:07:27.917 Running for 1 seconds... 00:07:27.917 00:07:27.917 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.917 ------------------------------------------------------------------------------------ 00:07:27.917 0,0 243168/s 949 MiB/s 0 0 00:07:27.917 ==================================================================================== 00:07:27.917 Total 243168/s 949 MiB/s 0 0' 00:07:27.917 20:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:27.917 20:53:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:27.917 20:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:27.917 20:53:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:27.917 20:53:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.917 20:53:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.917 20:53:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.917 20:53:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.917 20:53:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.917 20:53:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.917 20:53:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.917 20:53:48 -- accel/accel.sh@42 -- # jq -r . 00:07:27.917 [2024-12-08 20:53:48.708503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:27.917 [2024-12-08 20:53:48.708669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59579 ] 00:07:27.917 [2024-12-08 20:53:48.874705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.177 [2024-12-08 20:53:49.024497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val= 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val= 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val=0x1 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val= 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val= 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val=xor 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val=2 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val= 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val=software 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val=32 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val=32 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val=1 00:07:28.177 20:53:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val=Yes 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val= 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 20:53:49 -- accel/accel.sh@21 -- # val= 00:07:28.177 20:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 20:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.081 20:53:50 -- accel/accel.sh@21 -- # val= 00:07:30.081 20:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.081 20:53:50 -- accel/accel.sh@21 -- # val= 00:07:30.081 20:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.081 20:53:50 -- accel/accel.sh@21 -- # val= 00:07:30.081 20:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.081 20:53:50 -- accel/accel.sh@21 -- # val= 00:07:30.081 20:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.081 20:53:50 -- accel/accel.sh@21 -- # val= 00:07:30.081 20:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.081 20:53:50 -- accel/accel.sh@21 -- # val= 00:07:30.081 20:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.081 20:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.081 20:53:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.081 20:53:50 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:30.081 20:53:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.081 00:07:30.081 real 0m4.292s 00:07:30.081 user 0m3.803s 00:07:30.081 sys 0m0.282s 00:07:30.081 20:53:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.081 ************************************ 00:07:30.081 END TEST accel_xor 00:07:30.081 ************************************ 00:07:30.081 20:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:30.081 20:53:50 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:30.081 20:53:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:30.081 20:53:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.081 20:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:30.081 ************************************ 00:07:30.081 START TEST accel_xor 00:07:30.081 ************************************ 00:07:30.081 
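The run starting here repeats the workload with three source buffers (-x 3) and keeps -y, so accel_perf checks each destination buffer after the operation. A toy per-byte illustration (not SPDK code) of the self-inverse property such a check can rely on; the real run operates on whole 4096-byte vectors:

    # XOR is associative and self-inverse: folding the sources back into
    # the destination must yield zero, which makes the result cheap to
    # verify. The byte values here are arbitrary examples.
    a=0xA5; b=0x3C; c=0x0F
    dst=$(( a ^ b ^ c ))           # what the xor workload computes per byte
    check=$(( dst ^ a ^ b ^ c ))   # must be 0 if dst is correct
    printf 'dst=0x%02X check=%d\n' "$dst" "$check"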
20:53:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:30.081 20:53:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.081 20:53:50 -- accel/accel.sh@17 -- # local accel_module 00:07:30.081 20:53:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:30.081 20:53:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:30.081 20:53:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.081 20:53:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.081 20:53:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.081 20:53:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.081 20:53:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.081 20:53:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.081 20:53:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.081 20:53:50 -- accel/accel.sh@42 -- # jq -r . 00:07:30.081 [2024-12-08 20:53:50.912109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.081 [2024-12-08 20:53:50.912304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59620 ] 00:07:30.081 [2024-12-08 20:53:51.079398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.339 [2024-12-08 20:53:51.233959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.244 20:53:53 -- accel/accel.sh@18 -- # out=' 00:07:32.244 SPDK Configuration: 00:07:32.244 Core mask: 0x1 00:07:32.244 00:07:32.244 Accel Perf Configuration: 00:07:32.244 Workload Type: xor 00:07:32.244 Source buffers: 3 00:07:32.244 Transfer size: 4096 bytes 00:07:32.244 Vector count 1 00:07:32.244 Module: software 00:07:32.244 Queue depth: 32 00:07:32.244 Allocate depth: 32 00:07:32.244 # threads/core: 1 00:07:32.244 Run time: 1 seconds 00:07:32.244 Verify: Yes 00:07:32.244 00:07:32.244 Running for 1 seconds... 00:07:32.244 00:07:32.244 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.244 ------------------------------------------------------------------------------------ 00:07:32.244 0,0 236928/s 925 MiB/s 0 0 00:07:32.244 ==================================================================================== 00:07:32.244 Total 236928/s 925 MiB/s 0 0' 00:07:32.244 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.244 20:53:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:32.244 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.244 20:53:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:32.244 20:53:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.244 20:53:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.244 20:53:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.244 20:53:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.244 20:53:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.244 20:53:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.244 20:53:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.244 20:53:53 -- accel/accel.sh@42 -- # jq -r . 00:07:32.244 [2024-12-08 20:53:53.094446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
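The bandwidth column in these tables follows directly from the transfer rate times the 4096-byte transfer size. A quick check against the two xor results above:

    # transfers/s * 4096 B / 2^20 = MiB/s (integer division, as reported):
    # 243168/s -> 949 MiB/s (two sources), 236928/s -> 925 MiB/s (three)
    for tps in 243168 236928; do
        echo "$tps transfers/s = $(( tps * 4096 / 1048576 )) MiB/s"
    done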
00:07:32.244 [2024-12-08 20:53:53.094664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59646 ] 00:07:32.244 [2024-12-08 20:53:53.263675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.503 [2024-12-08 20:53:53.412852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val= 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val= 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val=0x1 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val= 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val= 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val=xor 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val=3 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val= 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val=software 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val=32 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val=32 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val=1 00:07:32.762 20:53:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val=Yes 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val= 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:32.762 20:53:53 -- accel/accel.sh@21 -- # val= 00:07:32.762 20:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:32.762 20:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.138 20:53:55 -- accel/accel.sh@21 -- # val= 00:07:34.138 20:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.138 20:53:55 -- accel/accel.sh@21 -- # val= 00:07:34.138 20:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.138 20:53:55 -- accel/accel.sh@21 -- # val= 00:07:34.138 20:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.138 20:53:55 -- accel/accel.sh@21 -- # val= 00:07:34.138 20:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.138 20:53:55 -- accel/accel.sh@21 -- # val= 00:07:34.138 20:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.138 20:53:55 -- accel/accel.sh@21 -- # val= 00:07:34.138 20:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.138 20:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.396 20:53:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.396 20:53:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:34.396 20:53:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.396 00:07:34.396 real 0m4.328s 00:07:34.396 user 0m3.837s 00:07:34.396 sys 0m0.288s 00:07:34.396 20:53:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.396 ************************************ 00:07:34.396 END TEST accel_xor 00:07:34.396 ************************************ 00:07:34.396 20:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:34.396 20:53:55 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:34.396 20:53:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:34.396 20:53:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.396 20:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:34.396 ************************************ 00:07:34.396 START TEST accel_dif_verify 00:07:34.396 ************************************ 
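The dif_verify pass beginning here checks T10 DIF protection information; the configuration block below shows the geometry: 4096-byte vectors split into 512-byte blocks carrying 8 bytes of metadata each. Note that 'Verify: No' in that block refers to accel_perf's own -y output check, which is not passed for this run; the DIF comparison is the workload itself. The equivalent standalone invocation, flags verbatim from the trace and the fd-62 config again a stand-in:

    # dif_verify with the tool's default DIF geometry (4096 B vector,
    # 512 B block, 8 B metadata, as printed in the configuration below);
    # no extra flags are passed for the geometry.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w dif_verify 62<<< '{}'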
00:07:34.396 20:53:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:34.396 20:53:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.396 20:53:55 -- accel/accel.sh@17 -- # local accel_module 00:07:34.396 20:53:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:34.396 20:53:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:34.396 20:53:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.396 20:53:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.396 20:53:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.396 20:53:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.396 20:53:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.396 20:53:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.396 20:53:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.396 20:53:55 -- accel/accel.sh@42 -- # jq -r . 00:07:34.396 [2024-12-08 20:53:55.276661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.396 [2024-12-08 20:53:55.276811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 00:07:34.655 [2024-12-08 20:53:55.444179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.655 [2024-12-08 20:53:55.588120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.554 20:53:57 -- accel/accel.sh@18 -- # out=' 00:07:36.554 SPDK Configuration: 00:07:36.554 Core mask: 0x1 00:07:36.554 00:07:36.554 Accel Perf Configuration: 00:07:36.554 Workload Type: dif_verify 00:07:36.554 Vector size: 4096 bytes 00:07:36.554 Transfer size: 4096 bytes 00:07:36.554 Block size: 512 bytes 00:07:36.554 Metadata size: 8 bytes 00:07:36.554 Vector count 1 00:07:36.554 Module: software 00:07:36.554 Queue depth: 32 00:07:36.554 Allocate depth: 32 00:07:36.554 # threads/core: 1 00:07:36.554 Run time: 1 seconds 00:07:36.554 Verify: No 00:07:36.554 00:07:36.554 Running for 1 seconds... 00:07:36.554 00:07:36.554 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.554 ------------------------------------------------------------------------------------ 00:07:36.554 0,0 115008/s 456 MiB/s 0 0 00:07:36.554 ==================================================================================== 00:07:36.554 Total 115008/s 449 MiB/s 0 0' 00:07:36.554 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:36.554 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:36.554 20:53:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:36.554 20:53:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:36.554 20:53:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.554 20:53:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.554 20:53:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.554 20:53:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.554 20:53:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.554 20:53:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.554 20:53:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.554 20:53:57 -- accel/accel.sh@42 -- # jq -r . 00:07:36.554 [2024-12-08 20:53:57.415316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
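One detail worth flagging in the table above: the per-core row reports 456 MiB/s while the Total row reports 449 MiB/s for the same 115008 transfers/s. The figures are consistent with the per-core row counting data plus DIF metadata (4096 + 64 bytes per transfer, since 4096/512 blocks times 8 bytes gives 64) while the Total row counts data only; the same offset recurs in the dif_generate and dif_generate_copy tables below. This is a reading inferred from the arithmetic, not confirmed from the tool's source:

    # Both figures reproduce from the same transfer rate:
    tps=115008
    echo "data+metadata: $(( tps * (4096 + 64) / 1048576 )) MiB/s"   # 456
    echo "data only:     $(( tps * 4096 / 1048576 )) MiB/s"          # 449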
00:07:36.554 [2024-12-08 20:53:57.415469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59722 ] 00:07:36.554 [2024-12-08 20:53:57.581278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.813 [2024-12-08 20:53:57.725143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val= 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val= 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val=0x1 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val= 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val= 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val=dif_verify 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val= 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val=software 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 
-- # val=32 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val=32 00:07:37.072 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.072 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.072 20:53:57 -- accel/accel.sh@21 -- # val=1 00:07:37.073 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.073 20:53:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.073 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.073 20:53:57 -- accel/accel.sh@21 -- # val=No 00:07:37.073 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.073 20:53:57 -- accel/accel.sh@21 -- # val= 00:07:37.073 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.073 20:53:57 -- accel/accel.sh@21 -- # val= 00:07:37.073 20:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.073 20:53:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.450 20:53:59 -- accel/accel.sh@21 -- # val= 00:07:38.450 20:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:38.450 20:53:59 -- accel/accel.sh@21 -- # val= 00:07:38.450 20:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:38.450 20:53:59 -- accel/accel.sh@21 -- # val= 00:07:38.450 20:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:38.450 20:53:59 -- accel/accel.sh@21 -- # val= 00:07:38.450 20:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:38.450 20:53:59 -- accel/accel.sh@21 -- # val= 00:07:38.450 20:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:38.450 20:53:59 -- accel/accel.sh@21 -- # val= 00:07:38.450 20:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.450 20:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:38.710 20:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:38.710 20:53:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.710 20:53:59 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:38.710 20:53:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.710 00:07:38.710 real 0m4.273s 00:07:38.710 user 0m3.792s 00:07:38.710 sys 0m0.282s 00:07:38.710 20:53:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.710 20:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:38.710 ************************************ 00:07:38.710 END TEST 
accel_dif_verify 00:07:38.710 ************************************ 00:07:38.710 20:53:59 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:38.710 20:53:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:38.710 20:53:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.710 20:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:38.710 ************************************ 00:07:38.710 START TEST accel_dif_generate 00:07:38.710 ************************************ 00:07:38.710 20:53:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:38.710 20:53:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.710 20:53:59 -- accel/accel.sh@17 -- # local accel_module 00:07:38.710 20:53:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:38.710 20:53:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:38.710 20:53:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.710 20:53:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.710 20:53:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.710 20:53:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.710 20:53:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.710 20:53:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.710 20:53:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.710 20:53:59 -- accel/accel.sh@42 -- # jq -r . 00:07:38.710 [2024-12-08 20:53:59.609922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.710 [2024-12-08 20:53:59.610097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59763 ] 00:07:38.976 [2024-12-08 20:53:59.778668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.976 [2024-12-08 20:53:59.925314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.956 20:54:01 -- accel/accel.sh@18 -- # out=' 00:07:40.956 SPDK Configuration: 00:07:40.956 Core mask: 0x1 00:07:40.956 00:07:40.956 Accel Perf Configuration: 00:07:40.956 Workload Type: dif_generate 00:07:40.956 Vector size: 4096 bytes 00:07:40.956 Transfer size: 4096 bytes 00:07:40.956 Block size: 512 bytes 00:07:40.956 Metadata size: 8 bytes 00:07:40.956 Vector count 1 00:07:40.956 Module: software 00:07:40.956 Queue depth: 32 00:07:40.956 Allocate depth: 32 00:07:40.956 # threads/core: 1 00:07:40.956 Run time: 1 seconds 00:07:40.956 Verify: No 00:07:40.956 00:07:40.956 Running for 1 seconds... 
00:07:40.956 00:07:40.956 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.956 ------------------------------------------------------------------------------------ 00:07:40.956 0,0 130368/s 517 MiB/s 0 0 00:07:40.956 ==================================================================================== 00:07:40.956 Total 130368/s 509 MiB/s 0 0' 00:07:40.956 20:54:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.956 20:54:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:40.956 20:54:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.956 20:54:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:40.956 20:54:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.956 20:54:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.956 20:54:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.956 20:54:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.956 20:54:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.956 20:54:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.956 20:54:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.956 20:54:01 -- accel/accel.sh@42 -- # jq -r . 00:07:40.956 [2024-12-08 20:54:01.786352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.956 [2024-12-08 20:54:01.786501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59793 ] 00:07:40.956 [2024-12-08 20:54:01.954638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.215 [2024-12-08 20:54:02.108424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val= 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val= 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val=0x1 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val= 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val= 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val=dif_generate 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 
00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val= 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val=software 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val=32 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val=32 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val=1 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.474 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.474 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.474 20:54:02 -- accel/accel.sh@21 -- # val=No 00:07:41.475 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.475 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.475 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.475 20:54:02 -- accel/accel.sh@21 -- # val= 00:07:41.475 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.475 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.475 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.475 20:54:02 -- accel/accel.sh@21 -- # val= 00:07:41.475 20:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.475 20:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.475 20:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:43.379 20:54:03 -- accel/accel.sh@21 -- # val= 00:07:43.379 20:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.379 20:54:03 -- accel/accel.sh@21 -- # val= 00:07:43.379 20:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.379 20:54:03 -- accel/accel.sh@21 -- # val= 00:07:43.379 20:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.379 20:54:03 -- 
accel/accel.sh@20 -- # IFS=: 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.379 20:54:03 -- accel/accel.sh@21 -- # val= 00:07:43.379 20:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.379 20:54:03 -- accel/accel.sh@21 -- # val= 00:07:43.379 20:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.379 20:54:03 -- accel/accel.sh@21 -- # val= 00:07:43.379 20:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.379 20:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.379 ************************************ 00:07:43.379 END TEST accel_dif_generate 00:07:43.379 ************************************ 00:07:43.379 20:54:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.379 20:54:03 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:43.379 20:54:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.379 00:07:43.379 real 0m4.374s 00:07:43.379 user 0m3.871s 00:07:43.379 sys 0m0.298s 00:07:43.379 20:54:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.379 20:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:43.379 20:54:03 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:43.379 20:54:03 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:43.379 20:54:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.379 20:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:43.379 ************************************ 00:07:43.379 START TEST accel_dif_generate_copy 00:07:43.379 ************************************ 00:07:43.379 20:54:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:43.379 20:54:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.379 20:54:03 -- accel/accel.sh@17 -- # local accel_module 00:07:43.380 20:54:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:43.380 20:54:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:43.380 20:54:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.380 20:54:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.380 20:54:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.380 20:54:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.380 20:54:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.380 20:54:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.380 20:54:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.380 20:54:03 -- accel/accel.sh@42 -- # jq -r . 00:07:43.380 [2024-12-08 20:54:04.035014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
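Between the dif_generate pass that just finished and the dif_generate_copy pass starting here, the only change in the driving command is the workload name; the _copy variant additionally writes the DIF-protected data into a separate destination buffer in the same operation, which is consistent with the lower transfer rate here (101536/s against 130368/s for plain dif_generate). A sketch of the pair, with the fd-62 config stand-in as before:

    # Same flags, two workload names; dif_generate_copy fuses metadata
    # generation with a copy into a destination buffer.
    for wl in dif_generate dif_generate_copy; do
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
            -c /dev/fd/62 -t 1 -w "$wl" 62<<< '{}'
    done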
00:07:43.380 [2024-12-08 20:54:04.035204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59835 ] 00:07:43.380 [2024-12-08 20:54:04.201840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.380 [2024-12-08 20:54:04.344257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.282 20:54:06 -- accel/accel.sh@18 -- # out=' 00:07:45.282 SPDK Configuration: 00:07:45.282 Core mask: 0x1 00:07:45.282 00:07:45.282 Accel Perf Configuration: 00:07:45.282 Workload Type: dif_generate_copy 00:07:45.282 Vector size: 4096 bytes 00:07:45.282 Transfer size: 4096 bytes 00:07:45.282 Vector count 1 00:07:45.282 Module: software 00:07:45.282 Queue depth: 32 00:07:45.282 Allocate depth: 32 00:07:45.282 # threads/core: 1 00:07:45.282 Run time: 1 seconds 00:07:45.282 Verify: No 00:07:45.282 00:07:45.282 Running for 1 seconds... 00:07:45.282 00:07:45.282 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.282 ------------------------------------------------------------------------------------ 00:07:45.282 0,0 101536/s 402 MiB/s 0 0 00:07:45.282 ==================================================================================== 00:07:45.282 Total 101536/s 396 MiB/s 0 0' 00:07:45.282 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.282 20:54:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:45.282 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.282 20:54:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:45.282 20:54:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.282 20:54:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.282 20:54:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.282 20:54:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.282 20:54:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.282 20:54:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.282 20:54:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.282 20:54:06 -- accel/accel.sh@42 -- # jq -r . 00:07:45.282 [2024-12-08 20:54:06.182802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:45.282 [2024-12-08 20:54:06.182959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59861 ] 00:07:45.541 [2024-12-08 20:54:06.349228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.541 [2024-12-08 20:54:06.496538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val= 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val= 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val=0x1 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val= 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val= 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val= 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val=software 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val=32 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val=32 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 
-- # val=1 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val=No 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val= 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.801 20:54:06 -- accel/accel.sh@21 -- # val= 00:07:45.801 20:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:45.801 20:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.707 20:54:08 -- accel/accel.sh@21 -- # val= 00:07:47.707 20:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.707 20:54:08 -- accel/accel.sh@21 -- # val= 00:07:47.707 20:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.707 20:54:08 -- accel/accel.sh@21 -- # val= 00:07:47.707 20:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.707 20:54:08 -- accel/accel.sh@21 -- # val= 00:07:47.707 20:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.707 20:54:08 -- accel/accel.sh@21 -- # val= 00:07:47.707 20:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.707 20:54:08 -- accel/accel.sh@21 -- # val= 00:07:47.707 20:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.707 20:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.707 20:54:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.707 20:54:08 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:47.707 20:54:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.707 00:07:47.707 real 0m4.286s 00:07:47.707 user 0m3.822s 00:07:47.707 sys 0m0.259s 00:07:47.707 20:54:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.707 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:07:47.707 ************************************ 00:07:47.707 END TEST accel_dif_generate_copy 00:07:47.707 ************************************ 00:07:47.707 20:54:08 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:47.707 20:54:08 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.707 20:54:08 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:47.707 20:54:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.707 20:54:08 -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.707 ************************************ 00:07:47.707 START TEST accel_comp 00:07:47.707 ************************************ 00:07:47.707 20:54:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.707 20:54:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.707 20:54:08 -- accel/accel.sh@17 -- # local accel_module 00:07:47.707 20:54:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.707 20:54:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.707 20:54:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.707 20:54:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.707 20:54:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.707 20:54:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.707 20:54:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.707 20:54:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.707 20:54:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.707 20:54:08 -- accel/accel.sh@42 -- # jq -r . 00:07:47.707 [2024-12-08 20:54:08.367923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.707 [2024-12-08 20:54:08.368107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:07:47.707 [2024-12-08 20:54:08.533961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.707 [2024-12-08 20:54:08.677542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.608 20:54:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:49.608 00:07:49.608 SPDK Configuration: 00:07:49.608 Core mask: 0x1 00:07:49.608 00:07:49.609 Accel Perf Configuration: 00:07:49.609 Workload Type: compress 00:07:49.609 Transfer size: 4096 bytes 00:07:49.609 Vector count 1 00:07:49.609 Module: software 00:07:49.609 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.609 Queue depth: 32 00:07:49.609 Allocate depth: 32 00:07:49.609 # threads/core: 1 00:07:49.609 Run time: 1 seconds 00:07:49.609 Verify: No 00:07:49.609 00:07:49.609 Running for 1 seconds... 
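Unlike the synthetic-buffer workloads above, the compress pass reads a real payload: -l points accel_perf at test/accel/bib, hence the 'Preparing input file...' line. Standalone form, path and flags verbatim from the trace:

    # Compress a real input file; '{}' over fd 62 is again a stand-in
    # for the harness-generated accel config.
    bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w compress -l "$bib" 62<<< '{}'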
00:07:49.609 00:07:49.609 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:49.609 ------------------------------------------------------------------------------------ 00:07:49.609 0,0 58624/s 244 MiB/s 0 0 00:07:49.609 ==================================================================================== 00:07:49.609 Total 58624/s 229 MiB/s 0 0' 00:07:49.609 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:49.609 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:49.609 20:54:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.609 20:54:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.609 20:54:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.609 20:54:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.609 20:54:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.609 20:54:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.609 20:54:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.609 20:54:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.609 20:54:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.609 20:54:10 -- accel/accel.sh@42 -- # jq -r . 00:07:49.609 [2024-12-08 20:54:10.514977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.609 [2024-12-08 20:54:10.515155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59928 ] 00:07:49.867 [2024-12-08 20:54:10.682423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.867 [2024-12-08 20:54:10.828529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val= 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val= 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val= 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val=0x1 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val= 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val= 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val=compress 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 
00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val= 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val=software 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val=32 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.124 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.124 20:54:10 -- accel/accel.sh@21 -- # val=32 00:07:50.124 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.125 20:54:10 -- accel/accel.sh@21 -- # val=1 00:07:50.125 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.125 20:54:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.125 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.125 20:54:10 -- accel/accel.sh@21 -- # val=No 00:07:50.125 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.125 20:54:10 -- accel/accel.sh@21 -- # val= 00:07:50.125 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:50.125 20:54:10 -- accel/accel.sh@21 -- # val= 00:07:50.125 20:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:50.125 20:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:52.025 20:54:12 -- accel/accel.sh@21 -- # val= 00:07:52.025 20:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.025 20:54:12 -- accel/accel.sh@21 -- # val= 00:07:52.025 20:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.025 20:54:12 -- accel/accel.sh@21 -- # val= 00:07:52.025 20:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.025 20:54:12 -- accel/accel.sh@21 -- # val= 
00:07:52.025 20:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.025 20:54:12 -- accel/accel.sh@21 -- # val= 00:07:52.025 20:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.025 20:54:12 -- accel/accel.sh@21 -- # val= 00:07:52.025 20:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.025 20:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.025 20:54:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.025 20:54:12 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:52.025 20:54:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.025 00:07:52.025 real 0m4.299s 00:07:52.025 user 0m3.813s 00:07:52.025 sys 0m0.278s 00:07:52.025 20:54:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:52.025 20:54:12 -- common/autotest_common.sh@10 -- # set +x 00:07:52.025 ************************************ 00:07:52.025 END TEST accel_comp 00:07:52.025 ************************************ 00:07:52.025 20:54:12 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:52.025 20:54:12 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:52.025 20:54:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.025 20:54:12 -- common/autotest_common.sh@10 -- # set +x 00:07:52.025 ************************************ 00:07:52.025 START TEST accel_decomp 00:07:52.025 ************************************ 00:07:52.025 20:54:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:52.025 20:54:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.025 20:54:12 -- accel/accel.sh@17 -- # local accel_module 00:07:52.025 20:54:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:52.025 20:54:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:52.025 20:54:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.025 20:54:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.025 20:54:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.025 20:54:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.025 20:54:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.025 20:54:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.025 20:54:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.025 20:54:12 -- accel/accel.sh@42 -- # jq -r . 00:07:52.025 [2024-12-08 20:54:12.714536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:52.025 [2024-12-08 20:54:12.714687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59975 ] 00:07:52.026 [2024-12-08 20:54:12.881489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.026 [2024-12-08 20:54:13.022720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.929 20:54:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:53.929 00:07:53.929 SPDK Configuration: 00:07:53.929 Core mask: 0x1 00:07:53.929 00:07:53.929 Accel Perf Configuration: 00:07:53.929 Workload Type: decompress 00:07:53.929 Transfer size: 4096 bytes 00:07:53.929 Vector count 1 00:07:53.929 Module: software 00:07:53.929 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.929 Queue depth: 32 00:07:53.929 Allocate depth: 32 00:07:53.929 # threads/core: 1 00:07:53.929 Run time: 1 seconds 00:07:53.929 Verify: Yes 00:07:53.929 00:07:53.929 Running for 1 seconds... 00:07:53.929 00:07:53.929 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.929 ------------------------------------------------------------------------------------ 00:07:53.929 0,0 74496/s 137 MiB/s 0 0 00:07:53.929 ==================================================================================== 00:07:53.929 Total 74496/s 291 MiB/s 0 0' 00:07:53.929 20:54:14 -- accel/accel.sh@20 -- # IFS=: 00:07:53.929 20:54:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:53.929 20:54:14 -- accel/accel.sh@20 -- # read -r var val 00:07:53.929 20:54:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:53.929 20:54:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.929 20:54:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.929 20:54:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.929 20:54:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.929 20:54:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.929 20:54:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.929 20:54:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.929 20:54:14 -- accel/accel.sh@42 -- # jq -r . 00:07:53.929 [2024-12-08 20:54:14.849502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
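The accel_perf command line captured in the trace above is self-contained enough to rerun by hand. A minimal sketch, assuming the built tree at /home/vagrant/spdk_repo/spdk recorded in this run and omitting the -c /dev/fd/62 JSON config the harness pipes in through jq (the accel_json_cfg array is empty here, and the results report Module: software either way):

  # Rerun the 4096-byte software decompress case from this log by hand.
  perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  bib=/home/vagrant/spdk_repo/spdk/test/accel/bib   # pre-generated compressed input
  "$perf" -t 1 -w decompress -l "$bib" -y
  # -t 1: run for 1 second; -w decompress: workload type;
  # -l: compressed input file; -y: verify the decompressed output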
00:07:53.929 [2024-12-08 20:54:14.849652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60001 ] 00:07:54.189 [2024-12-08 20:54:15.018068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.189 [2024-12-08 20:54:15.161769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val= 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val= 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val= 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val=0x1 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val= 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val= 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val=decompress 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val= 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val=software 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val=32 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- 
accel/accel.sh@21 -- # val=32 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val=1 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val=Yes 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val= 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.449 20:54:15 -- accel/accel.sh@21 -- # val= 00:07:54.449 20:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.449 20:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:56.354 20:54:16 -- accel/accel.sh@21 -- # val= 00:07:56.354 20:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.354 20:54:16 -- accel/accel.sh@21 -- # val= 00:07:56.354 20:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.354 20:54:16 -- accel/accel.sh@21 -- # val= 00:07:56.354 20:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.354 20:54:16 -- accel/accel.sh@21 -- # val= 00:07:56.354 20:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.354 20:54:16 -- accel/accel.sh@21 -- # val= 00:07:56.354 20:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.354 20:54:16 -- accel/accel.sh@21 -- # val= 00:07:56.354 20:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.354 20:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.354 20:54:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.354 20:54:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:56.354 20:54:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.354 00:07:56.354 real 0m4.285s 00:07:56.354 user 0m3.816s 00:07:56.354 sys 0m0.267s 00:07:56.354 20:54:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.354 20:54:16 -- common/autotest_common.sh@10 -- # set +x 00:07:56.354 ************************************ 00:07:56.354 END TEST accel_decomp 00:07:56.354 ************************************ 00:07:56.354 20:54:16 -- accel/accel.sh@110 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
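The START/END banners and the real/user/sys triple wrapped around each test come from the run_test helper in common/autotest_common.sh, visible in the trace as the @1087/@1093/@1114/@1115 lines. Judged purely from this log's output, its effect amounts to roughly the following sketch; the real helper differs in detail:

  # Hedged reconstruction of the banner-and-timing wrapper seen in this log.
  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"          # the bash time keyword emits the real/user/sys lines
    local rc=$?        # exit status of the wrapped test command
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
  }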
00:07:56.354 20:54:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:56.354 20:54:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.354 20:54:16 -- common/autotest_common.sh@10 -- # set +x 00:07:56.354 ************************************ 00:07:56.354 START TEST accel_decomp_full 00:07:56.354 ************************************ 00:07:56.354 20:54:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.354 20:54:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.354 20:54:17 -- accel/accel.sh@17 -- # local accel_module 00:07:56.354 20:54:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.354 20:54:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.354 20:54:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.354 20:54:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.354 20:54:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.354 20:54:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.354 20:54:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.354 20:54:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.354 20:54:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.354 20:54:17 -- accel/accel.sh@42 -- # jq -r . 00:07:56.355 [2024-12-08 20:54:17.050597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:56.355 [2024-12-08 20:54:17.050777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60042 ] 00:07:56.355 [2024-12-08 20:54:17.215595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.355 [2024-12-08 20:54:17.357030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.258 20:54:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:58.258 00:07:58.258 SPDK Configuration: 00:07:58.258 Core mask: 0x1 00:07:58.258 00:07:58.258 Accel Perf Configuration: 00:07:58.258 Workload Type: decompress 00:07:58.258 Transfer size: 111250 bytes 00:07:58.258 Vector count 1 00:07:58.258 Module: software 00:07:58.258 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.258 Queue depth: 32 00:07:58.258 Allocate depth: 32 00:07:58.258 # threads/core: 1 00:07:58.258 Run time: 1 seconds 00:07:58.258 Verify: Yes 00:07:58.258 00:07:58.258 Running for 1 seconds...
00:07:58.258 00:07:58.258 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:58.258 ------------------------------------------------------------------------------------ 00:07:58.258 0,0 5504/s 227 MiB/s 0 0 00:07:58.258 ==================================================================================== 00:07:58.258 Total 5504/s 583 MiB/s 0 0' 00:07:58.258 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.258 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.258 20:54:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:58.258 20:54:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:58.258 20:54:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.258 20:54:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.258 20:54:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.258 20:54:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.258 20:54:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.258 20:54:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.258 20:54:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.258 20:54:19 -- accel/accel.sh@42 -- # jq -r . 00:07:58.258 [2024-12-08 20:54:19.203182] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:58.258 [2024-12-08 20:54:19.203497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60068 ] 00:07:58.517 [2024-12-08 20:54:19.370989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.517 [2024-12-08 20:54:19.514415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val= 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val= 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val= 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val=0x1 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val= 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val= 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val=decompress 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:58.776 20:54:19 -- accel/accel.sh@20 
-- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val= 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val=software 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val=32 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val=32 00:07:58.776 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.776 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.776 20:54:19 -- accel/accel.sh@21 -- # val=1 00:07:58.777 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.777 20:54:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:58.777 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.777 20:54:19 -- accel/accel.sh@21 -- # val=Yes 00:07:58.777 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.777 20:54:19 -- accel/accel.sh@21 -- # val= 00:07:58.777 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:58.777 20:54:19 -- accel/accel.sh@21 -- # val= 00:07:58.777 20:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:58.777 20:54:19 -- accel/accel.sh@20 -- # read -r var val 00:08:00.682 20:54:21 -- accel/accel.sh@21 -- # val= 00:08:00.682 20:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:00.682 20:54:21 -- accel/accel.sh@21 -- # val= 00:08:00.682 20:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:00.682 20:54:21 -- accel/accel.sh@21 -- # val= 00:08:00.682 20:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:00.682 20:54:21 -- accel/accel.sh@21 -- # 
val= 00:08:00.682 20:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:00.682 20:54:21 -- accel/accel.sh@21 -- # val= 00:08:00.682 20:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:00.682 20:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:00.682 20:54:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:00.682 20:54:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:00.682 20:54:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.682 00:08:00.682 real 0m4.307s 00:08:00.682 user 0m3.838s 00:08:00.682 sys 0m0.264s 00:08:00.682 20:54:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.682 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:08:00.682 ************************************ 00:08:00.682 END TEST accel_decomp_full 00:08:00.682 ************************************ 00:08:00.682 20:54:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:00.682 20:54:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:00.682 20:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.682 20:54:21 -- common/autotest_common.sh@10 -- # set +x 00:08:00.682 ************************************ 00:08:00.682 START TEST accel_decomp_mcore 00:08:00.682 ************************************ 00:08:00.682 20:54:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:00.682 20:54:21 -- accel/accel.sh@16 -- # local accel_opc 00:08:00.682 20:54:21 -- accel/accel.sh@17 -- # local accel_module 00:08:00.682 20:54:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:00.682 20:54:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:00.682 20:54:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.682 20:54:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:00.682 20:54:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.682 20:54:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.682 20:54:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:00.682 20:54:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:00.682 20:54:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:00.682 20:54:21 -- accel/accel.sh@42 -- # jq -r . 00:08:00.682 [2024-12-08 20:54:21.407562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
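The new flag in this run_test invocation is -m 0xf, the core mask: binary 1111 selects cores 0 through 3, which is why the EAL parameters below carry -c 0xf and four reactor threads start. A tiny decoder for such masks (a hypothetical helper, not part of the test suite):

  mask=0xf
  for ((i = 0; i < 8; i++)); do
    (( (mask >> i) & 1 )) && echo "core $i enabled"
  done
  # prints cores 0-3, matching the four 'Reactor started on core N' notices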
00:08:00.682 [2024-12-08 20:54:21.407719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60109 ] 00:08:00.682 [2024-12-08 20:54:21.575320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.682 [2024-12-08 20:54:21.722513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.682 [2024-12-08 20:54:21.722654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.682 [2024-12-08 20:54:21.722782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.682 [2024-12-08 20:54:21.722996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.588 20:54:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:02.588 00:08:02.588 SPDK Configuration: 00:08:02.588 Core mask: 0xf 00:08:02.588 00:08:02.588 Accel Perf Configuration: 00:08:02.588 Workload Type: decompress 00:08:02.588 Transfer size: 4096 bytes 00:08:02.588 Vector count 1 00:08:02.588 Module: software 00:08:02.588 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:02.588 Queue depth: 32 00:08:02.588 Allocate depth: 32 00:08:02.588 # threads/core: 1 00:08:02.588 Run time: 1 seconds 00:08:02.588 Verify: Yes 00:08:02.588 00:08:02.588 Running for 1 seconds... 00:08:02.588 00:08:02.588 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:02.588 ------------------------------------------------------------------------------------ 00:08:02.588 0,0 61920/s 114 MiB/s 0 0 00:08:02.588 3,0 61600/s 113 MiB/s 0 0 00:08:02.588 2,0 62176/s 114 MiB/s 0 0 00:08:02.588 1,0 61408/s 113 MiB/s 0 0 00:08:02.588 ==================================================================================== 00:08:02.588 Total 247104/s 965 MiB/s 0 0' 00:08:02.588 20:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:02.588 20:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:02.588 20:54:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:02.588 20:54:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:02.588 20:54:23 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.588 20:54:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.588 20:54:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.588 20:54:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.588 20:54:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.588 20:54:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.588 20:54:23 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.588 20:54:23 -- accel/accel.sh@42 -- # jq -r . 00:08:02.588 [2024-12-08 20:54:23.620065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
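In the multi-core table above, each row is one core,thread pair and the Total line is their sum; at 4096 bytes per transfer the bandwidth figure checks out too. A quick verification of the numbers as printed:

  echo 61920 61600 62176 61408 \
    | awk '{ for (i = 1; i <= NF; i++) s += $i;
             printf "%d transfers/s = %.0f MiB/s\n", s, s * 4096 / 1048576 }'
  # -> 247104 transfers/s = 965 MiB/s, matching the Total row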
00:08:02.588 [2024-12-08 20:54:23.620217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60138 ] 00:08:02.847 [2024-12-08 20:54:23.773356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.106 [2024-12-08 20:54:23.923508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.106 [2024-12-08 20:54:23.923619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.106 [2024-12-08 20:54:23.923739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.106 [2024-12-08 20:54:23.923965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val= 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val= 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val= 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val=0xf 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val= 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val= 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val=decompress 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val= 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val=software 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@23 -- # accel_module=software 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 
00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val=32 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val=32 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val=1 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val=Yes 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val= 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.106 20:54:24 -- accel/accel.sh@21 -- # val= 00:08:03.106 20:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.106 20:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@21 -- # val= 00:08:05.007 20:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@21 -- # val= 00:08:05.007 20:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@21 -- # val= 00:08:05.007 20:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@21 -- # val= 00:08:05.007 20:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@21 -- # val= 00:08:05.007 20:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@21 -- # val= 00:08:05.007 20:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@21 -- # val= 00:08:05.007 20:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@21 -- # val= 00:08:05.007 20:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:05.007 20:54:25 -- 
accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@21 -- # val= 00:08:05.007 20:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:05.007 20:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:05.007 20:54:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:05.007 20:54:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:05.007 20:54:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.007 00:08:05.007 real 0m4.416s 00:08:05.007 user 0m13.258s 00:08:05.007 sys 0m0.319s 00:08:05.007 20:54:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.007 20:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:05.008 ************************************ 00:08:05.008 END TEST accel_decomp_mcore 00:08:05.008 ************************************ 00:08:05.008 20:54:25 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.008 20:54:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:05.008 20:54:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.008 20:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:05.008 ************************************ 00:08:05.008 START TEST accel_decomp_full_mcore 00:08:05.008 ************************************ 00:08:05.008 20:54:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.008 20:54:25 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.008 20:54:25 -- accel/accel.sh@17 -- # local accel_module 00:08:05.008 20:54:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.008 20:54:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.008 20:54:25 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.008 20:54:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.008 20:54:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.008 20:54:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.008 20:54:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.008 20:54:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.008 20:54:25 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.008 20:54:25 -- accel/accel.sh@42 -- # jq -r . 00:08:05.008 [2024-12-08 20:54:25.856726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.008 [2024-12-08 20:54:25.857151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60187 ] 00:08:05.008 [2024-12-08 20:54:26.008496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.265 [2024-12-08 20:54:26.160604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.265 [2024-12-08 20:54:26.160754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.265 [2024-12-08 20:54:26.160869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.265 [2024-12-08 20:54:26.160951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.162 20:54:28 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:07.162 00:08:07.162 SPDK Configuration: 00:08:07.162 Core mask: 0xf 00:08:07.162 00:08:07.162 Accel Perf Configuration: 00:08:07.162 Workload Type: decompress 00:08:07.162 Transfer size: 111250 bytes 00:08:07.162 Vector count 1 00:08:07.162 Module: software 00:08:07.162 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:07.162 Queue depth: 32 00:08:07.162 Allocate depth: 32 00:08:07.162 # threads/core: 1 00:08:07.162 Run time: 1 seconds 00:08:07.162 Verify: Yes 00:08:07.162 00:08:07.162 Running for 1 seconds... 00:08:07.162 00:08:07.162 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:07.162 ------------------------------------------------------------------------------------ 00:08:07.162 0,0 4928/s 203 MiB/s 0 0 00:08:07.162 3,0 4928/s 203 MiB/s 0 0 00:08:07.162 2,0 4864/s 200 MiB/s 0 0 00:08:07.162 1,0 4896/s 202 MiB/s 0 0 00:08:07.162 ==================================================================================== 00:08:07.162 Total 19616/s 2081 MiB/s 0 0' 00:08:07.162 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.162 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.162 20:54:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:07.162 20:54:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:07.162 20:54:28 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.162 20:54:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.162 20:54:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.162 20:54:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.162 20:54:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.162 20:54:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.162 20:54:28 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.162 20:54:28 -- accel/accel.sh@42 -- # jq -r . 00:08:07.162 [2024-12-08 20:54:28.074659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
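Compared with the 4096-byte runs, -o 0 appears to make accel_perf size each transfer to the whole input file: the bib file is 111250 bytes and the configuration above reports exactly that. The equivalent hand invocation, under the same path assumptions as the earlier sketch:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress -y \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
    -o 0 -m 0xf
  # -o 0: one transfer per whole 111250-byte file; -m 0xf: cores 0-3
  # sanity check on the Total row: 19616/s * 111250 B is about 2081 MiB/s, as reported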
00:08:07.162 [2024-12-08 20:54:28.075281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60217 ] 00:08:07.421 [2024-12-08 20:54:28.227684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.421 [2024-12-08 20:54:28.377745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.421 [2024-12-08 20:54:28.377856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.421 [2024-12-08 20:54:28.377978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.421 [2024-12-08 20:54:28.378186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val= 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val= 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val= 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val=0xf 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val= 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val= 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val=decompress 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val= 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val=software 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@23 -- # accel_module=software 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 
00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val=32 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val=32 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val=1 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.680 20:54:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:07.680 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.680 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.681 20:54:28 -- accel/accel.sh@21 -- # val=Yes 00:08:07.681 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.681 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.681 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.681 20:54:28 -- accel/accel.sh@21 -- # val= 00:08:07.681 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.681 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.681 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.681 20:54:28 -- accel/accel.sh@21 -- # val= 00:08:07.681 20:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.681 20:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:07.681 20:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.585 20:54:30 -- accel/accel.sh@21 -- # val= 00:08:09.585 20:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # IFS=: 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # read -r var val 00:08:09.585 20:54:30 -- accel/accel.sh@21 -- # val= 00:08:09.585 20:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # IFS=: 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # read -r var val 00:08:09.585 20:54:30 -- accel/accel.sh@21 -- # val= 00:08:09.585 20:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # IFS=: 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # read -r var val 00:08:09.585 20:54:30 -- accel/accel.sh@21 -- # val= 00:08:09.585 20:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # IFS=: 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # read -r var val 00:08:09.585 20:54:30 -- accel/accel.sh@21 -- # val= 00:08:09.585 20:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # IFS=: 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # read -r var val 00:08:09.585 20:54:30 -- accel/accel.sh@21 -- # val= 00:08:09.585 20:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # IFS=: 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # read -r var val 00:08:09.585 20:54:30 -- accel/accel.sh@21 -- # val= 00:08:09.585 20:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # IFS=: 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # read -r var val 00:08:09.585 20:54:30 -- accel/accel.sh@21 -- # val= 00:08:09.585 20:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.585 20:54:30 -- accel/accel.sh@20 -- # IFS=: 00:08:09.586 20:54:30 -- 
accel/accel.sh@20 -- # read -r var val 00:08:09.586 20:54:30 -- accel/accel.sh@21 -- # val= 00:08:09.586 20:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.586 20:54:30 -- accel/accel.sh@20 -- # IFS=: 00:08:09.586 20:54:30 -- accel/accel.sh@20 -- # read -r var val 00:08:09.586 20:54:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:09.586 20:54:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:09.586 20:54:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.586 00:08:09.586 real 0m4.444s 00:08:09.586 user 0m6.709s 00:08:09.586 sys 0m0.151s 00:08:09.586 20:54:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.586 20:54:30 -- common/autotest_common.sh@10 -- # set +x 00:08:09.586 ************************************ 00:08:09.586 END TEST accel_decomp_full_mcore 00:08:09.586 ************************************ 00:08:09.586 20:54:30 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:09.586 20:54:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:09.586 20:54:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.586 20:54:30 -- common/autotest_common.sh@10 -- # set +x 00:08:09.586 ************************************ 00:08:09.586 START TEST accel_decomp_mthread 00:08:09.586 ************************************ 00:08:09.586 20:54:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:09.586 20:54:30 -- accel/accel.sh@16 -- # local accel_opc 00:08:09.586 20:54:30 -- accel/accel.sh@17 -- # local accel_module 00:08:09.586 20:54:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:09.586 20:54:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:09.586 20:54:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.586 20:54:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.586 20:54:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.586 20:54:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.586 20:54:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.586 20:54:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.586 20:54:30 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.586 20:54:30 -- accel/accel.sh@42 -- # jq -r . 00:08:09.586 [2024-12-08 20:54:30.347955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:09.586 [2024-12-08 20:54:30.348102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60261 ] 00:08:09.586 [2024-12-08 20:54:30.501598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.845 [2024-12-08 20:54:30.643286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.758 20:54:32 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:11.758 00:08:11.758 SPDK Configuration: 00:08:11.758 Core mask: 0x1 00:08:11.758 00:08:11.758 Accel Perf Configuration: 00:08:11.758 Workload Type: decompress 00:08:11.758 Transfer size: 4096 bytes 00:08:11.758 Vector count 1 00:08:11.758 Module: software 00:08:11.758 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.758 Queue depth: 32 00:08:11.758 Allocate depth: 32 00:08:11.758 # threads/core: 2 00:08:11.758 Run time: 1 seconds 00:08:11.758 Verify: Yes 00:08:11.758 00:08:11.758 Running for 1 seconds... 00:08:11.758 00:08:11.758 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:11.758 ------------------------------------------------------------------------------------ 00:08:11.758 0,1 37568/s 69 MiB/s 0 0 00:08:11.758 0,0 37472/s 69 MiB/s 0 0 00:08:11.758 ==================================================================================== 00:08:11.758 Total 75040/s 293 MiB/s 0 0' 00:08:11.758 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:11.758 20:54:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:11.758 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:11.758 20:54:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:11.758 20:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.758 20:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.758 20:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.758 20:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.758 20:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.758 20:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.758 20:54:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.758 20:54:32 -- accel/accel.sh@42 -- # jq -r . 00:08:11.758 [2024-12-08 20:54:32.474834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
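Here -T 2 gives each enabled core two worker threads (the core mask stays at its 0x1 default), hence '# threads/core: 2' in the configuration and the two rows 0,0 and 0,1 in the table, both on core 0. Splitting the Core,Thread column apart makes the layout explicit:

  printf '0,1 37568\n0,0 37472\n' \
    | awk -F '[, ]' '{ printf "core %s, thread %s: %s transfers/s\n", $1, $2, $3 }'
  # both threads share core 0; their sum, 75040/s, is the Total row above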
00:08:11.758 [2024-12-08 20:54:32.474986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60287 ] 00:08:11.758 [2024-12-08 20:54:32.643516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.758 [2024-12-08 20:54:32.786255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val= 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val= 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val= 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val=0x1 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val= 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val= 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val=decompress 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val= 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val=software 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@23 -- # accel_module=software 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val=32 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- 
accel/accel.sh@21 -- # val=32 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val=2 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val=Yes 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val= 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:12.017 20:54:32 -- accel/accel.sh@21 -- # val= 00:08:12.017 20:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:12.017 20:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:13.922 20:54:34 -- accel/accel.sh@21 -- # val= 00:08:13.922 20:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:13.922 20:54:34 -- accel/accel.sh@21 -- # val= 00:08:13.922 20:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:13.922 20:54:34 -- accel/accel.sh@21 -- # val= 00:08:13.922 20:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:13.922 20:54:34 -- accel/accel.sh@21 -- # val= 00:08:13.922 20:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:13.922 20:54:34 -- accel/accel.sh@21 -- # val= 00:08:13.922 20:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:13.922 20:54:34 -- accel/accel.sh@21 -- # val= 00:08:13.922 20:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:13.922 20:54:34 -- accel/accel.sh@21 -- # val= 00:08:13.922 20:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:13.922 20:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:13.922 20:54:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:13.922 20:54:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:13.922 20:54:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.922 00:08:13.922 real 0m4.262s 00:08:13.922 user 0m3.803s 00:08:13.922 sys 0m0.256s 00:08:13.922 20:54:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.922 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.922 ************************************ 00:08:13.922 END 
TEST accel_decomp_mthread 00:08:13.922 ************************************ 00:08:13.922 20:54:34 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.922 20:54:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:13.922 20:54:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.922 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.922 ************************************ 00:08:13.922 START TEST accel_deomp_full_mthread 00:08:13.922 ************************************ 00:08:13.922 20:54:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.922 20:54:34 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.922 20:54:34 -- accel/accel.sh@17 -- # local accel_module 00:08:13.922 20:54:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.922 20:54:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.922 20:54:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.922 20:54:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.922 20:54:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.922 20:54:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.922 20:54:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.922 20:54:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.923 20:54:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.923 20:54:34 -- accel/accel.sh@42 -- # jq -r . 00:08:13.923 [2024-12-08 20:54:34.664068] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:13.923 [2024-12-08 20:54:34.664791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60332 ] 00:08:13.923 [2024-12-08 20:54:34.813011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.923 [2024-12-08 20:54:34.954423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.837 20:54:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:15.837 00:08:15.837 SPDK Configuration: 00:08:15.837 Core mask: 0x1 00:08:15.837 00:08:15.837 Accel Perf Configuration: 00:08:15.837 Workload Type: decompress 00:08:15.837 Transfer size: 111250 bytes 00:08:15.837 Vector count 1 00:08:15.837 Module: software 00:08:15.837 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.837 Queue depth: 32 00:08:15.837 Allocate depth: 32 00:08:15.837 # threads/core: 2 00:08:15.837 Run time: 1 seconds 00:08:15.837 Verify: Yes 00:08:15.837 00:08:15.837 Running for 1 seconds... 
00:08:15.837 00:08:15.837 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:15.837 ------------------------------------------------------------------------------------ 00:08:15.837 0,1 2720/s 112 MiB/s 0 0 00:08:15.837 0,0 2720/s 112 MiB/s 0 0 00:08:15.837 ==================================================================================== 00:08:15.837 Total 5440/s 577 MiB/s 0 0' 00:08:15.837 20:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.837 20:54:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:15.837 20:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.837 20:54:36 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.837 20:54:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:15.837 20:54:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:15.837 20:54:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.837 20:54:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.837 20:54:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:15.837 20:54:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:15.837 20:54:36 -- accel/accel.sh@41 -- # local IFS=, 00:08:15.837 20:54:36 -- accel/accel.sh@42 -- # jq -r . 00:08:15.837 [2024-12-08 20:54:36.819316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:15.837 [2024-12-08 20:54:36.819467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60359 ] 00:08:16.097 [2024-12-08 20:54:36.985581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.097 [2024-12-08 20:54:37.136483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.355 20:54:37 -- accel/accel.sh@21 -- # val= 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val= 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val= 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val=0x1 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val= 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val= 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val=decompress 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val= 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val=software 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@23 -- # accel_module=software 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val=32 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val=32 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val=2 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val=Yes 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val= 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.356 20:54:37 -- accel/accel.sh@21 -- # val= 00:08:16.356 20:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:16.356 20:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:18.257 20:54:38 -- accel/accel.sh@21 -- # val= 00:08:18.257 20:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # IFS=: 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # read -r var val 00:08:18.257 20:54:38 -- accel/accel.sh@21 -- # val= 00:08:18.257 20:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # IFS=: 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # read -r var val 00:08:18.257 20:54:38 -- accel/accel.sh@21 -- # val= 00:08:18.257 20:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # IFS=: 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # 
read -r var val 00:08:18.257 20:54:38 -- accel/accel.sh@21 -- # val= 00:08:18.257 20:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # IFS=: 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # read -r var val 00:08:18.257 20:54:38 -- accel/accel.sh@21 -- # val= 00:08:18.257 20:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # IFS=: 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # read -r var val 00:08:18.257 20:54:38 -- accel/accel.sh@21 -- # val= 00:08:18.257 20:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # IFS=: 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # read -r var val 00:08:18.257 20:54:38 -- accel/accel.sh@21 -- # val= 00:08:18.257 20:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # IFS=: 00:08:18.257 20:54:38 -- accel/accel.sh@20 -- # read -r var val 00:08:18.257 20:54:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:18.257 20:54:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:18.257 20:54:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.257 00:08:18.257 real 0m4.329s 00:08:18.257 user 0m3.876s 00:08:18.257 sys 0m0.248s 00:08:18.258 20:54:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.258 ************************************ 00:08:18.258 END TEST accel_deomp_full_mthread 00:08:18.258 ************************************ 00:08:18.258 20:54:38 -- common/autotest_common.sh@10 -- # set +x 00:08:18.258 20:54:38 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:18.258 20:54:38 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:18.258 20:54:38 -- accel/accel.sh@129 -- # build_accel_config 00:08:18.258 20:54:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:18.258 20:54:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:18.258 20:54:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.258 20:54:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.258 20:54:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.258 20:54:38 -- common/autotest_common.sh@10 -- # set +x 00:08:18.258 20:54:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:18.258 20:54:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:18.258 20:54:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:18.258 20:54:38 -- accel/accel.sh@42 -- # jq -r . 00:08:18.258 ************************************ 00:08:18.258 START TEST accel_dif_functional_tests 00:08:18.258 ************************************ 00:08:18.258 20:54:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:18.258 [2024-12-08 20:54:39.093301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:18.258 [2024-12-08 20:54:39.093457] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60401 ] 00:08:18.258 [2024-12-08 20:54:39.262831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.516 [2024-12-08 20:54:39.410511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.516 [2024-12-08 20:54:39.410618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.516 [2024-12-08 20:54:39.410625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.775 00:08:18.775 00:08:18.775 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.775 http://cunit.sourceforge.net/ 00:08:18.775 00:08:18.775 00:08:18.775 Suite: accel_dif 00:08:18.775 Test: verify: DIF generated, GUARD check ...passed 00:08:18.775 Test: verify: DIF generated, APPTAG check ...passed 00:08:18.775 Test: verify: DIF generated, REFTAG check ...passed 00:08:18.775 Test: verify: DIF not generated, GUARD check ...passed 00:08:18.775 Test: verify: DIF not generated, APPTAG check ...passed 00:08:18.775 Test: verify: DIF not generated, REFTAG check ...passed 00:08:18.775 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:18.775 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-08 20:54:39.641592] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:18.775 [2024-12-08 20:54:39.641664] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:18.775 [2024-12-08 20:54:39.641722] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:18.775 [2024-12-08 20:54:39.641765] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:18.775 [2024-12-08 20:54:39.641804] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:18.775 [2024-12-08 20:54:39.641837] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:18.775 [2024-12-08 20:54:39.641920] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:18.775 passed 00:08:18.775 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:18.775 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:18.775 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:18.775 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:18.775 Test: generate copy: DIF generated, GUARD check ...passed 00:08:18.775 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:18.775 Test: generate copy: DIF generated, REFTAG check ...[2024-12-08 20:54:39.642186] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:18.775 passed 00:08:18.775 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:18.775 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:18.775 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:18.775 Test: generate copy: iovecs-len validate ...passed 00:08:18.775 Test: generate copy: buffer alignment validate ...[2024-12-08 20:54:39.642575] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:18.775 passed 00:08:18.775 00:08:18.775 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.775 suites 1 1 n/a 0 0 00:08:18.775 tests 20 20 20 0 0 00:08:18.775 asserts 204 204 204 0 n/a 00:08:18.775 00:08:18.775 Elapsed time = 0.003 seconds 00:08:19.709 00:08:19.709 real 0m1.523s 00:08:19.709 user 0m2.886s 00:08:19.709 sys 0m0.190s 00:08:19.709 20:54:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.709 20:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.709 ************************************ 00:08:19.709 END TEST accel_dif_functional_tests 00:08:19.710 ************************************ 00:08:19.710 ************************************ 00:08:19.710 END TEST accel 00:08:19.710 ************************************ 00:08:19.710 00:08:19.710 real 1m34.389s 00:08:19.710 user 1m43.820s 00:08:19.710 sys 0m7.227s 00:08:19.710 20:54:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.710 20:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.710 20:54:40 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:19.710 20:54:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.710 20:54:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.710 20:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.710 ************************************ 00:08:19.710 START TEST accel_rpc 00:08:19.710 ************************************ 00:08:19.710 20:54:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:19.710 * Looking for test storage... 00:08:19.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:19.710 20:54:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:19.710 20:54:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:19.710 20:54:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:19.710 20:54:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:19.710 20:54:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:19.710 20:54:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:19.710 20:54:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:19.710 20:54:40 -- scripts/common.sh@335 -- # IFS=.-: 00:08:19.710 20:54:40 -- scripts/common.sh@335 -- # read -ra ver1 00:08:19.710 20:54:40 -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.710 20:54:40 -- scripts/common.sh@336 -- # read -ra ver2 00:08:19.710 20:54:40 -- scripts/common.sh@337 -- # local 'op=<' 00:08:19.710 20:54:40 -- scripts/common.sh@339 -- # ver1_l=2 00:08:19.710 20:54:40 -- scripts/common.sh@340 -- # ver2_l=1 00:08:19.710 20:54:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:19.710 20:54:40 -- scripts/common.sh@343 -- # case "$op" in 00:08:19.710 20:54:40 -- scripts/common.sh@344 -- # : 1 00:08:19.710 20:54:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:19.710 20:54:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.710 20:54:40 -- scripts/common.sh@364 -- # decimal 1 00:08:19.969 20:54:40 -- scripts/common.sh@352 -- # local d=1 00:08:19.969 20:54:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.969 20:54:40 -- scripts/common.sh@354 -- # echo 1 00:08:19.969 20:54:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:19.969 20:54:40 -- scripts/common.sh@365 -- # decimal 2 00:08:19.969 20:54:40 -- scripts/common.sh@352 -- # local d=2 00:08:19.969 20:54:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.969 20:54:40 -- scripts/common.sh@354 -- # echo 2 00:08:19.969 20:54:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:19.969 20:54:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:19.969 20:54:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:19.969 20:54:40 -- scripts/common.sh@367 -- # return 0 00:08:19.969 20:54:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.969 20:54:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:19.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.969 --rc genhtml_branch_coverage=1 00:08:19.969 --rc genhtml_function_coverage=1 00:08:19.969 --rc genhtml_legend=1 00:08:19.969 --rc geninfo_all_blocks=1 00:08:19.969 --rc geninfo_unexecuted_blocks=1 00:08:19.969 00:08:19.969 ' 00:08:19.969 20:54:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:19.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.969 --rc genhtml_branch_coverage=1 00:08:19.969 --rc genhtml_function_coverage=1 00:08:19.969 --rc genhtml_legend=1 00:08:19.969 --rc geninfo_all_blocks=1 00:08:19.969 --rc geninfo_unexecuted_blocks=1 00:08:19.969 00:08:19.969 ' 00:08:19.969 20:54:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:19.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.969 --rc genhtml_branch_coverage=1 00:08:19.969 --rc genhtml_function_coverage=1 00:08:19.969 --rc genhtml_legend=1 00:08:19.969 --rc geninfo_all_blocks=1 00:08:19.969 --rc geninfo_unexecuted_blocks=1 00:08:19.969 00:08:19.969 ' 00:08:19.969 20:54:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:19.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.969 --rc genhtml_branch_coverage=1 00:08:19.969 --rc genhtml_function_coverage=1 00:08:19.969 --rc genhtml_legend=1 00:08:19.969 --rc geninfo_all_blocks=1 00:08:19.969 --rc geninfo_unexecuted_blocks=1 00:08:19.969 00:08:19.969 ' 00:08:19.969 20:54:40 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:19.969 20:54:40 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=60485 00:08:19.969 20:54:40 -- accel/accel_rpc.sh@15 -- # waitforlisten 60485 00:08:19.969 20:54:40 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:19.969 20:54:40 -- common/autotest_common.sh@829 -- # '[' -z 60485 ']' 00:08:19.969 20:54:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.969 20:54:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.969 20:54:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:19.969 20:54:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.969 20:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.969 [2024-12-08 20:54:40.869177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:19.969 [2024-12-08 20:54:40.869371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60485 ] 00:08:20.229 [2024-12-08 20:54:41.039324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.229 [2024-12-08 20:54:41.183761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:20.229 [2024-12-08 20:54:41.183987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.797 20:54:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.797 20:54:41 -- common/autotest_common.sh@862 -- # return 0 00:08:20.797 20:54:41 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:20.797 20:54:41 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:20.797 20:54:41 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:20.797 20:54:41 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:20.797 20:54:41 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:20.797 20:54:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:20.797 20:54:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.797 20:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:20.797 ************************************ 00:08:20.797 START TEST accel_assign_opcode 00:08:20.797 ************************************ 00:08:20.797 20:54:41 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:20.797 20:54:41 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:20.797 20:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.797 20:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:20.797 [2024-12-08 20:54:41.760818] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:20.797 20:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.797 20:54:41 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:20.797 20:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.797 20:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:20.797 [2024-12-08 20:54:41.768766] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:20.797 20:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.797 20:54:41 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:20.797 20:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.797 20:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:21.366 20:54:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.366 20:54:42 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:21.366 20:54:42 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:21.366 20:54:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.366 20:54:42 -- accel/accel_rpc.sh@42 -- # grep software 00:08:21.366 20:54:42 -- common/autotest_common.sh@10 -- # set +x 00:08:21.366 20:54:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.366 software 00:08:21.366 
************************************ 00:08:21.366 END TEST accel_assign_opcode 00:08:21.366 ************************************ 00:08:21.366 00:08:21.366 real 0m0.602s 00:08:21.366 user 0m0.044s 00:08:21.366 sys 0m0.009s 00:08:21.366 20:54:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.366 20:54:42 -- common/autotest_common.sh@10 -- # set +x 00:08:21.366 20:54:42 -- accel/accel_rpc.sh@55 -- # killprocess 60485 00:08:21.366 20:54:42 -- common/autotest_common.sh@936 -- # '[' -z 60485 ']' 00:08:21.366 20:54:42 -- common/autotest_common.sh@940 -- # kill -0 60485 00:08:21.366 20:54:42 -- common/autotest_common.sh@941 -- # uname 00:08:21.366 20:54:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.366 20:54:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60485 00:08:21.626 killing process with pid 60485 00:08:21.626 20:54:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:21.626 20:54:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:21.626 20:54:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60485' 00:08:21.626 20:54:42 -- common/autotest_common.sh@955 -- # kill 60485 00:08:21.626 20:54:42 -- common/autotest_common.sh@960 -- # wait 60485 00:08:23.006 00:08:23.006 real 0m3.412s 00:08:23.006 user 0m3.439s 00:08:23.006 sys 0m0.459s 00:08:23.006 20:54:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.006 ************************************ 00:08:23.006 END TEST accel_rpc 00:08:23.006 ************************************ 00:08:23.006 20:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:23.266 20:54:44 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:23.266 20:54:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:23.266 20:54:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.266 20:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:23.266 ************************************ 00:08:23.266 START TEST app_cmdline 00:08:23.266 ************************************ 00:08:23.266 20:54:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:23.266 * Looking for test storage... 
00:08:23.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:23.266 20:54:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:23.266 20:54:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:23.266 20:54:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:23.266 20:54:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:23.266 20:54:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:23.266 20:54:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:23.266 20:54:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:23.266 20:54:44 -- scripts/common.sh@335 -- # IFS=.-: 00:08:23.266 20:54:44 -- scripts/common.sh@335 -- # read -ra ver1 00:08:23.266 20:54:44 -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.266 20:54:44 -- scripts/common.sh@336 -- # read -ra ver2 00:08:23.266 20:54:44 -- scripts/common.sh@337 -- # local 'op=<' 00:08:23.266 20:54:44 -- scripts/common.sh@339 -- # ver1_l=2 00:08:23.266 20:54:44 -- scripts/common.sh@340 -- # ver2_l=1 00:08:23.266 20:54:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:23.266 20:54:44 -- scripts/common.sh@343 -- # case "$op" in 00:08:23.266 20:54:44 -- scripts/common.sh@344 -- # : 1 00:08:23.266 20:54:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:23.266 20:54:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.266 20:54:44 -- scripts/common.sh@364 -- # decimal 1 00:08:23.266 20:54:44 -- scripts/common.sh@352 -- # local d=1 00:08:23.266 20:54:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.266 20:54:44 -- scripts/common.sh@354 -- # echo 1 00:08:23.266 20:54:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:23.266 20:54:44 -- scripts/common.sh@365 -- # decimal 2 00:08:23.266 20:54:44 -- scripts/common.sh@352 -- # local d=2 00:08:23.266 20:54:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.266 20:54:44 -- scripts/common.sh@354 -- # echo 2 00:08:23.266 20:54:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:23.266 20:54:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:23.266 20:54:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:23.266 20:54:44 -- scripts/common.sh@367 -- # return 0 00:08:23.266 20:54:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.266 20:54:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:23.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.266 --rc genhtml_branch_coverage=1 00:08:23.266 --rc genhtml_function_coverage=1 00:08:23.266 --rc genhtml_legend=1 00:08:23.266 --rc geninfo_all_blocks=1 00:08:23.266 --rc geninfo_unexecuted_blocks=1 00:08:23.266 00:08:23.266 ' 00:08:23.266 20:54:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:23.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.266 --rc genhtml_branch_coverage=1 00:08:23.266 --rc genhtml_function_coverage=1 00:08:23.266 --rc genhtml_legend=1 00:08:23.266 --rc geninfo_all_blocks=1 00:08:23.266 --rc geninfo_unexecuted_blocks=1 00:08:23.266 00:08:23.266 ' 00:08:23.266 20:54:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:23.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.266 --rc genhtml_branch_coverage=1 00:08:23.266 --rc genhtml_function_coverage=1 00:08:23.266 --rc genhtml_legend=1 00:08:23.266 --rc geninfo_all_blocks=1 00:08:23.266 --rc geninfo_unexecuted_blocks=1 00:08:23.266 00:08:23.266 ' 00:08:23.266 20:54:44 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:23.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.266 --rc genhtml_branch_coverage=1 00:08:23.266 --rc genhtml_function_coverage=1 00:08:23.266 --rc genhtml_legend=1 00:08:23.266 --rc geninfo_all_blocks=1 00:08:23.266 --rc geninfo_unexecuted_blocks=1 00:08:23.266 00:08:23.266 ' 00:08:23.266 20:54:44 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:23.266 20:54:44 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60608 00:08:23.266 20:54:44 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:23.266 20:54:44 -- app/cmdline.sh@18 -- # waitforlisten 60608 00:08:23.266 20:54:44 -- common/autotest_common.sh@829 -- # '[' -z 60608 ']' 00:08:23.266 20:54:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.266 20:54:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.266 20:54:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.266 20:54:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.266 20:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:23.525 [2024-12-08 20:54:44.403774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:23.525 [2024-12-08 20:54:44.404190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ] 00:08:23.784 [2024-12-08 20:54:44.575051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.784 [2024-12-08 20:54:44.716096] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:23.784 [2024-12-08 20:54:44.716562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.352 20:54:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.352 20:54:45 -- common/autotest_common.sh@862 -- # return 0 00:08:24.352 20:54:45 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:24.609 { 00:08:24.609 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:24.609 "fields": { 00:08:24.609 "major": 24, 00:08:24.609 "minor": 1, 00:08:24.610 "patch": 1, 00:08:24.610 "suffix": "-pre", 00:08:24.610 "commit": "c13c99a5e" 00:08:24.610 } 00:08:24.610 } 00:08:24.610 20:54:45 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:24.610 20:54:45 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:24.610 20:54:45 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:24.610 20:54:45 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:24.610 20:54:45 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:24.610 20:54:45 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:24.610 20:54:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.610 20:54:45 -- app/cmdline.sh@26 -- # sort 00:08:24.610 20:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:24.610 20:54:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.610 20:54:45 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:24.610 20:54:45 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:24.610 20:54:45 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:24.610 20:54:45 -- common/autotest_common.sh@650 -- # local es=0 00:08:24.610 20:54:45 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:24.610 20:54:45 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.610 20:54:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.610 20:54:45 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.610 20:54:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.610 20:54:45 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.610 20:54:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.610 20:54:45 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.610 20:54:45 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:24.610 20:54:45 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:24.868 request: 00:08:24.868 { 00:08:24.868 "method": "env_dpdk_get_mem_stats", 00:08:24.868 "req_id": 1 00:08:24.868 } 00:08:24.868 Got JSON-RPC error response 00:08:24.868 response: 00:08:24.868 { 00:08:24.868 "code": -32601, 00:08:24.868 "message": "Method not found" 00:08:24.868 } 00:08:24.868 20:54:45 -- common/autotest_common.sh@653 -- # es=1 00:08:24.868 20:54:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.868 20:54:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.868 20:54:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.868 20:54:45 -- app/cmdline.sh@1 -- # killprocess 60608 00:08:24.868 20:54:45 -- common/autotest_common.sh@936 -- # '[' -z 60608 ']' 00:08:24.868 20:54:45 -- common/autotest_common.sh@940 -- # kill -0 60608 00:08:24.868 20:54:45 -- common/autotest_common.sh@941 -- # uname 00:08:24.868 20:54:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:24.868 20:54:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60608 00:08:24.868 killing process with pid 60608 00:08:24.868 20:54:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:24.868 20:54:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:24.868 20:54:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60608' 00:08:24.868 20:54:45 -- common/autotest_common.sh@955 -- # kill 60608 00:08:24.868 20:54:45 -- common/autotest_common.sh@960 -- # wait 60608 00:08:26.775 ************************************ 00:08:26.775 END TEST app_cmdline 00:08:26.775 ************************************ 00:08:26.775 00:08:26.775 real 0m3.355s 00:08:26.775 user 0m3.787s 00:08:26.775 sys 0m0.499s 00:08:26.775 20:54:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.775 20:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:26.775 20:54:47 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:26.775 20:54:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.775 20:54:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.775 20:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:26.775 
************************************ 00:08:26.775 START TEST version 00:08:26.775 ************************************ 00:08:26.775 20:54:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:26.775 * Looking for test storage... 00:08:26.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:26.775 20:54:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:26.775 20:54:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:26.775 20:54:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:26.775 20:54:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:26.775 20:54:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:26.775 20:54:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:26.775 20:54:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:26.775 20:54:47 -- scripts/common.sh@335 -- # IFS=.-: 00:08:26.775 20:54:47 -- scripts/common.sh@335 -- # read -ra ver1 00:08:26.775 20:54:47 -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.775 20:54:47 -- scripts/common.sh@336 -- # read -ra ver2 00:08:26.775 20:54:47 -- scripts/common.sh@337 -- # local 'op=<' 00:08:26.775 20:54:47 -- scripts/common.sh@339 -- # ver1_l=2 00:08:26.775 20:54:47 -- scripts/common.sh@340 -- # ver2_l=1 00:08:26.775 20:54:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:26.775 20:54:47 -- scripts/common.sh@343 -- # case "$op" in 00:08:26.775 20:54:47 -- scripts/common.sh@344 -- # : 1 00:08:26.775 20:54:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:26.775 20:54:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:26.775 20:54:47 -- scripts/common.sh@364 -- # decimal 1 00:08:26.775 20:54:47 -- scripts/common.sh@352 -- # local d=1 00:08:26.775 20:54:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.775 20:54:47 -- scripts/common.sh@354 -- # echo 1 00:08:26.775 20:54:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:26.775 20:54:47 -- scripts/common.sh@365 -- # decimal 2 00:08:26.775 20:54:47 -- scripts/common.sh@352 -- # local d=2 00:08:26.775 20:54:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.775 20:54:47 -- scripts/common.sh@354 -- # echo 2 00:08:26.775 20:54:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:26.775 20:54:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:26.775 20:54:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:26.775 20:54:47 -- scripts/common.sh@367 -- # return 0 00:08:26.775 20:54:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.775 20:54:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:26.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.775 --rc genhtml_branch_coverage=1 00:08:26.775 --rc genhtml_function_coverage=1 00:08:26.775 --rc genhtml_legend=1 00:08:26.775 --rc geninfo_all_blocks=1 00:08:26.775 --rc geninfo_unexecuted_blocks=1 00:08:26.775 00:08:26.775 ' 00:08:26.775 20:54:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:26.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.775 --rc genhtml_branch_coverage=1 00:08:26.775 --rc genhtml_function_coverage=1 00:08:26.775 --rc genhtml_legend=1 00:08:26.775 --rc geninfo_all_blocks=1 00:08:26.775 --rc geninfo_unexecuted_blocks=1 00:08:26.775 00:08:26.775 ' 00:08:26.775 20:54:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:26.775 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:26.775 --rc genhtml_branch_coverage=1 00:08:26.775 --rc genhtml_function_coverage=1 00:08:26.775 --rc genhtml_legend=1 00:08:26.776 --rc geninfo_all_blocks=1 00:08:26.776 --rc geninfo_unexecuted_blocks=1 00:08:26.776 00:08:26.776 ' 00:08:26.776 20:54:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.776 --rc genhtml_branch_coverage=1 00:08:26.776 --rc genhtml_function_coverage=1 00:08:26.776 --rc genhtml_legend=1 00:08:26.776 --rc geninfo_all_blocks=1 00:08:26.776 --rc geninfo_unexecuted_blocks=1 00:08:26.776 00:08:26.776 ' 00:08:26.776 20:54:47 -- app/version.sh@17 -- # get_header_version major 00:08:26.776 20:54:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:26.776 20:54:47 -- app/version.sh@14 -- # cut -f2 00:08:26.776 20:54:47 -- app/version.sh@14 -- # tr -d '"' 00:08:26.776 20:54:47 -- app/version.sh@17 -- # major=24 00:08:26.776 20:54:47 -- app/version.sh@18 -- # get_header_version minor 00:08:26.776 20:54:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:26.776 20:54:47 -- app/version.sh@14 -- # cut -f2 00:08:26.776 20:54:47 -- app/version.sh@14 -- # tr -d '"' 00:08:26.776 20:54:47 -- app/version.sh@18 -- # minor=1 00:08:26.776 20:54:47 -- app/version.sh@19 -- # get_header_version patch 00:08:26.776 20:54:47 -- app/version.sh@14 -- # cut -f2 00:08:26.776 20:54:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:26.776 20:54:47 -- app/version.sh@14 -- # tr -d '"' 00:08:26.776 20:54:47 -- app/version.sh@19 -- # patch=1 00:08:26.776 20:54:47 -- app/version.sh@20 -- # get_header_version suffix 00:08:26.776 20:54:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:26.776 20:54:47 -- app/version.sh@14 -- # cut -f2 00:08:26.776 20:54:47 -- app/version.sh@14 -- # tr -d '"' 00:08:26.776 20:54:47 -- app/version.sh@20 -- # suffix=-pre 00:08:26.776 20:54:47 -- app/version.sh@22 -- # version=24.1 00:08:26.776 20:54:47 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:26.776 20:54:47 -- app/version.sh@25 -- # version=24.1.1 00:08:26.776 20:54:47 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:26.776 20:54:47 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:26.776 20:54:47 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:26.776 20:54:47 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:26.776 20:54:47 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:26.776 00:08:26.776 real 0m0.244s 00:08:26.776 user 0m0.166s 00:08:26.776 sys 0m0.115s 00:08:26.776 ************************************ 00:08:26.776 END TEST version 00:08:26.776 ************************************ 00:08:26.776 20:54:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.776 20:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:26.776 20:54:47 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:26.776 20:54:47 -- spdk/autotest.sh@191 -- # uname -s 00:08:26.776 20:54:47 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
00:08:26.776 20:54:47 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:26.776 20:54:47 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:26.776 20:54:47 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:08:26.776 20:54:47 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:26.776 20:54:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:26.776 20:54:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.776 20:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:26.776 ************************************ 00:08:26.776 START TEST blockdev_nvme 00:08:26.776 ************************************ 00:08:26.776 20:54:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:27.036 * Looking for test storage... 00:08:27.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:27.036 20:54:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:27.036 20:54:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:27.036 20:54:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:27.036 20:54:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:27.036 20:54:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:27.036 20:54:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:27.036 20:54:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:27.036 20:54:47 -- scripts/common.sh@335 -- # IFS=.-: 00:08:27.036 20:54:47 -- scripts/common.sh@335 -- # read -ra ver1 00:08:27.036 20:54:47 -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.036 20:54:47 -- scripts/common.sh@336 -- # read -ra ver2 00:08:27.036 20:54:47 -- scripts/common.sh@337 -- # local 'op=<' 00:08:27.036 20:54:47 -- scripts/common.sh@339 -- # ver1_l=2 00:08:27.036 20:54:47 -- scripts/common.sh@340 -- # ver2_l=1 00:08:27.036 20:54:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:27.036 20:54:47 -- scripts/common.sh@343 -- # case "$op" in 00:08:27.036 20:54:47 -- scripts/common.sh@344 -- # : 1 00:08:27.036 20:54:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:27.036 20:54:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.036 20:54:47 -- scripts/common.sh@364 -- # decimal 1 00:08:27.036 20:54:47 -- scripts/common.sh@352 -- # local d=1 00:08:27.036 20:54:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.036 20:54:47 -- scripts/common.sh@354 -- # echo 1 00:08:27.036 20:54:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:27.036 20:54:47 -- scripts/common.sh@365 -- # decimal 2 00:08:27.036 20:54:47 -- scripts/common.sh@352 -- # local d=2 00:08:27.036 20:54:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.036 20:54:47 -- scripts/common.sh@354 -- # echo 2 00:08:27.036 20:54:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:27.036 20:54:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:27.036 20:54:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:27.036 20:54:48 -- scripts/common.sh@367 -- # return 0 00:08:27.036 20:54:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.036 20:54:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:27.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.036 --rc genhtml_branch_coverage=1 00:08:27.036 --rc genhtml_function_coverage=1 00:08:27.036 --rc genhtml_legend=1 00:08:27.036 --rc geninfo_all_blocks=1 00:08:27.036 --rc geninfo_unexecuted_blocks=1 00:08:27.036 00:08:27.036 ' 00:08:27.036 20:54:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:27.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.036 --rc genhtml_branch_coverage=1 00:08:27.036 --rc genhtml_function_coverage=1 00:08:27.036 --rc genhtml_legend=1 00:08:27.036 --rc geninfo_all_blocks=1 00:08:27.036 --rc geninfo_unexecuted_blocks=1 00:08:27.036 00:08:27.036 ' 00:08:27.036 20:54:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:27.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.036 --rc genhtml_branch_coverage=1 00:08:27.036 --rc genhtml_function_coverage=1 00:08:27.036 --rc genhtml_legend=1 00:08:27.036 --rc geninfo_all_blocks=1 00:08:27.036 --rc geninfo_unexecuted_blocks=1 00:08:27.036 00:08:27.036 ' 00:08:27.036 20:54:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:27.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.036 --rc genhtml_branch_coverage=1 00:08:27.036 --rc genhtml_function_coverage=1 00:08:27.036 --rc genhtml_legend=1 00:08:27.036 --rc geninfo_all_blocks=1 00:08:27.036 --rc geninfo_unexecuted_blocks=1 00:08:27.036 00:08:27.036 ' 00:08:27.036 20:54:48 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:27.036 20:54:48 -- bdev/nbd_common.sh@6 -- # set -e 00:08:27.036 20:54:48 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:27.036 20:54:48 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.036 20:54:48 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:27.036 20:54:48 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:27.036 20:54:48 -- bdev/blockdev.sh@18 -- # : 00:08:27.036 20:54:48 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:27.036 20:54:48 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:27.036 20:54:48 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:27.036 20:54:48 -- bdev/blockdev.sh@672 -- # uname -s 00:08:27.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:27.036 20:54:48 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:27.036 20:54:48 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:27.036 20:54:48 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:08:27.036 20:54:48 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:27.036 20:54:48 -- bdev/blockdev.sh@682 -- # dek= 00:08:27.036 20:54:48 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:27.036 20:54:48 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:27.036 20:54:48 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:27.036 20:54:48 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:08:27.036 20:54:48 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:08:27.036 20:54:48 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:27.036 20:54:48 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60778 00:08:27.036 20:54:48 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:27.036 20:54:48 -- bdev/blockdev.sh@47 -- # waitforlisten 60778 00:08:27.036 20:54:48 -- common/autotest_common.sh@829 -- # '[' -z 60778 ']' 00:08:27.036 20:54:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.036 20:54:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.036 20:54:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.036 20:54:48 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:27.037 20:54:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.037 20:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:27.296 [2024-12-08 20:54:48.128837] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:27.296 [2024-12-08 20:54:48.129014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60778 ] 00:08:27.296 [2024-12-08 20:54:48.298863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.556 [2024-12-08 20:54:48.449631] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.556 [2024-12-08 20:54:48.449821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.936 20:54:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.936 20:54:49 -- common/autotest_common.sh@862 -- # return 0 00:08:28.936 20:54:49 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:28.936 20:54:49 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:08:28.936 20:54:49 -- bdev/blockdev.sh@79 -- # local json 00:08:28.936 20:54:49 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:28.936 20:54:49 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:28.936 20:54:49 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:28.936 20:54:49 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.936 20:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:29.196 20:54:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.196 20:54:50 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:29.196 20:54:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.196 20:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:29.196 20:54:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.196 20:54:50 -- bdev/blockdev.sh@738 -- # cat 00:08:29.196 20:54:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:29.196 20:54:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.196 20:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:29.196 20:54:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.196 20:54:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:29.196 20:54:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.196 20:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:29.196 20:54:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.196 20:54:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:29.196 20:54:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.196 20:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:29.196 20:54:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.196 20:54:50 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:29.196 20:54:50 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:29.196 20:54:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.196 20:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:29.196 20:54:50 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:29.196 20:54:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.196 20:54:50 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:29.196 20:54:50 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:29.196 20:54:50 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6c7496a4-4490-478a-9321-a730cf7b9dad"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6c7496a4-4490-478a-9321-a730cf7b9dad",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "209eb4d0-b5f6-4593-ae72-0e1db0d23114"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "209eb4d0-b5f6-4593-ae72-0e1db0d23114",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d51c58b0-b0bf-43e4-a015-c3e13fe74a0f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d51c58b0-b0bf-43e4-a015-c3e13fe74a0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "16b83f32-1e50-444c-b17f-a5f1579536e0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "16b83f32-1e50-444c-b17f-a5f1579536e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c686190c-23c6-4a74-9980-cb2cd4759d2c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c686190c-23c6-4a74-9980-cb2cd4759d2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "18b925cb-9d14-4d1c-9dc7-3704cb6eeded"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "18b925cb-9d14-4d1c-9dc7-3704cb6eeded",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:29.196 20:54:50 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:29.196 20:54:50 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:08:29.196 20:54:50 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:29.196 20:54:50 -- bdev/blockdev.sh@752 -- # killprocess 60778 00:08:29.196 20:54:50 -- common/autotest_common.sh@936 -- # '[' -z 60778 ']' 00:08:29.196 20:54:50 -- common/autotest_common.sh@940 -- # kill -0 60778 00:08:29.196 20:54:50 -- common/autotest_common.sh@941 -- # uname 00:08:29.456 20:54:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:29.456 20:54:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60778 
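Note: the single-line JSON blob passed to load_subsystem_config above is the output of scripts/gen_nvme.sh; reflowed here for readability, it attaches the VM's four QEMU NVMe controllers by PCIe address:

  {
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } },
      { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:07.0" } },
      { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:08.0" } },
      { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:09.0" } }
    ]
  }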
00:08:29.456 killing process with pid 60778 00:08:29.456 20:54:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:29.456 20:54:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:29.456 20:54:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60778' 00:08:29.456 20:54:50 -- common/autotest_common.sh@955 -- # kill 60778 00:08:29.456 20:54:50 -- common/autotest_common.sh@960 -- # wait 60778 00:08:30.836 20:54:51 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:30.836 20:54:51 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:30.836 20:54:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:30.837 20:54:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.837 20:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:31.094 ************************************ 00:08:31.094 START TEST bdev_hello_world 00:08:31.094 ************************************ 00:08:31.094 20:54:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:31.094 [2024-12-08 20:54:51.943237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:31.094 [2024-12-08 20:54:51.943357] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60875 ] 00:08:31.094 [2024-12-08 20:54:52.092995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.352 [2024-12-08 20:54:52.235397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.919 [2024-12-08 20:54:52.763377] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:31.919 [2024-12-08 20:54:52.763427] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:31.919 [2024-12-08 20:54:52.763452] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:31.919 [2024-12-08 20:54:52.765979] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:31.919 [2024-12-08 20:54:52.766532] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:31.919 [2024-12-08 20:54:52.766676] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:31.919 [2024-12-08 20:54:52.766936] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
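Note: the bdev_hello_world test above reduces to one example-binary invocation. A minimal sketch of reproducing it by hand, assuming the same checkout and config as this run (root is needed for PCIe device access):

  # hello_bdev opens the named bdev, writes "Hello World!", reads it back, and prints the string.
  cd /home/vagrant/spdk_repo/spdk
  build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1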
00:08:31.919 00:08:31.919 [2024-12-08 20:54:52.766967] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:32.854 00:08:32.854 real 0m1.660s 00:08:32.854 user 0m1.383s 00:08:32.854 sys 0m0.176s 00:08:32.854 ************************************ 00:08:32.854 END TEST bdev_hello_world 00:08:32.854 ************************************ 00:08:32.854 20:54:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.854 20:54:53 -- common/autotest_common.sh@10 -- # set +x 00:08:32.854 20:54:53 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:32.854 20:54:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:32.854 20:54:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.854 20:54:53 -- common/autotest_common.sh@10 -- # set +x 00:08:32.854 ************************************ 00:08:32.854 START TEST bdev_bounds 00:08:32.854 ************************************ 00:08:32.854 20:54:53 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:32.854 Process bdevio pid: 60912 00:08:32.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.854 20:54:53 -- bdev/blockdev.sh@288 -- # bdevio_pid=60912 00:08:32.854 20:54:53 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:32.854 20:54:53 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60912' 00:08:32.854 20:54:53 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:32.854 20:54:53 -- bdev/blockdev.sh@291 -- # waitforlisten 60912 00:08:32.854 20:54:53 -- common/autotest_common.sh@829 -- # '[' -z 60912 ']' 00:08:32.854 20:54:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.854 20:54:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.854 20:54:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.854 20:54:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.854 20:54:53 -- common/autotest_common.sh@10 -- # set +x 00:08:32.854 [2024-12-08 20:54:53.685090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
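Note: bdev_bounds drives the bdevio binary in two steps: start it in wait mode, then trigger the suites over the RPC socket, which produces the per-bdev test listing below. The NOTICE-level COMPARE FAILURE and INVALID OPCODE completions inside those suites belong to passing negative-path tests (the run summary below reports 0 failures). A condensed sketch using the same arguments as this run (backgrounding added for the sketch):

  # bdevio loads the bdev config and waits (-w) for an RPC before running tests.
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
  # once /var/tmp/spdk.sock is listening:
  test/bdev/bdevio/tests.py perform_tests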
00:08:32.854 [2024-12-08 20:54:53.685265] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60912 ] 00:08:32.854 [2024-12-08 20:54:53.852917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:33.112 [2024-12-08 20:54:53.999887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.112 [2024-12-08 20:54:53.999989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.113 [2024-12-08 20:54:54.000009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.678 20:54:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.679 20:54:54 -- common/autotest_common.sh@862 -- # return 0 00:08:33.679 20:54:54 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:33.937 I/O targets: 00:08:33.937 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:33.937 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:33.937 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:33.937 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:33.937 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:33.937 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:33.937 00:08:33.937 00:08:33.938 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.938 http://cunit.sourceforge.net/ 00:08:33.938 00:08:33.938 00:08:33.938 Suite: bdevio tests on: Nvme3n1 00:08:33.938 Test: blockdev write read block ...passed 00:08:33.938 Test: blockdev write zeroes read block ...passed 00:08:33.938 Test: blockdev write zeroes read no split ...passed 00:08:33.938 Test: blockdev write zeroes read split ...passed 00:08:33.938 Test: blockdev write zeroes read split partial ...passed 00:08:33.938 Test: blockdev reset ...[2024-12-08 20:54:54.786891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:33.938 passed 00:08:33.938 Test: blockdev write read 8 blocks ...[2024-12-08 20:54:54.790268] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.938 passed 00:08:33.938 Test: blockdev write read size > 128k ...passed 00:08:33.938 Test: blockdev write read invalid size ...passed 00:08:33.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.938 Test: blockdev write read max offset ...passed 00:08:33.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.938 Test: blockdev writev readv 8 blocks ...passed 00:08:33.938 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.938 Test: blockdev writev readv block ...passed 00:08:33.938 Test: blockdev writev readv size > 128k ...passed 00:08:33.938 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.938 Test: blockdev comparev and writev ...[2024-12-08 20:54:54.799187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27760e000 len:0x1000 00:08:33.938 [2024-12-08 20:54:54.799246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.938 passed 00:08:33.938 Test: blockdev nvme passthru rw ...passed 00:08:33.938 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.938 Test: blockdev nvme admin passthru ...[2024-12-08 20:54:54.800158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.938 [2024-12-08 20:54:54.800205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.938 passed 00:08:33.938 Test: blockdev copy ...passed 00:08:33.938 Suite: bdevio tests on: Nvme2n3 00:08:33.938 Test: blockdev write read block ...passed 00:08:33.938 Test: blockdev write zeroes read block ...passed 00:08:33.938 Test: blockdev write zeroes read no split ...passed 00:08:33.938 Test: blockdev write zeroes read split ...passed 00:08:33.938 Test: blockdev write zeroes read split partial ...passed 00:08:33.938 Test: blockdev reset ...[2024-12-08 20:54:54.858356] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:33.938 [2024-12-08 20:54:54.862678] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.938 passed 00:08:33.938 Test: blockdev write read 8 blocks ...passed 00:08:33.938 Test: blockdev write read size > 128k ...passed 00:08:33.938 Test: blockdev write read invalid size ...passed 00:08:33.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.938 Test: blockdev write read max offset ...passed 00:08:33.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.938 Test: blockdev writev readv 8 blocks ...passed 00:08:33.938 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.938 Test: blockdev writev readv block ...passed 00:08:33.938 Test: blockdev writev readv size > 128k ...passed 00:08:33.938 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.938 Test: blockdev comparev and writev ...[2024-12-08 20:54:54.872241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27760a000 len:0x1000 00:08:33.938 [2024-12-08 20:54:54.872488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.938 passed 00:08:33.938 Test: blockdev nvme passthru rw ...passed 00:08:33.938 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.938 Test: blockdev nvme admin passthru ...[2024-12-08 20:54:54.873306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.938 [2024-12-08 20:54:54.873349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.938 passed 00:08:33.938 Test: blockdev copy ...passed 00:08:33.938 Suite: bdevio tests on: Nvme2n2 00:08:33.938 Test: blockdev write read block ...passed 00:08:33.938 Test: blockdev write zeroes read block ...passed 00:08:33.938 Test: blockdev write zeroes read no split ...passed 00:08:33.938 Test: blockdev write zeroes read split ...passed 00:08:33.938 Test: blockdev write zeroes read split partial ...passed 00:08:33.938 Test: blockdev reset ...[2024-12-08 20:54:54.929891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:33.938 [2024-12-08 20:54:54.933415] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.938 passed 00:08:33.938 Test: blockdev write read 8 blocks ...passed 00:08:33.938 Test: blockdev write read size > 128k ...passed 00:08:33.938 Test: blockdev write read invalid size ...passed 00:08:33.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.938 Test: blockdev write read max offset ...passed 00:08:33.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.938 Test: blockdev writev readv 8 blocks ...passed 00:08:33.938 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.938 Test: blockdev writev readv block ...passed 00:08:33.938 Test: blockdev writev readv size > 128k ...passed 00:08:33.938 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.938 Test: blockdev comparev and writev ...[2024-12-08 20:54:54.942714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271e06000 len:0x1000 00:08:33.938 [2024-12-08 20:54:54.942793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.938 passed 00:08:33.938 Test: blockdev nvme passthru rw ...passed 00:08:33.938 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.938 Test: blockdev nvme admin passthru ...[2024-12-08 20:54:54.943699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.938 [2024-12-08 20:54:54.943738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.938 passed 00:08:33.938 Test: blockdev copy ...passed 00:08:33.938 Suite: bdevio tests on: Nvme2n1 00:08:33.938 Test: blockdev write read block ...passed 00:08:33.938 Test: blockdev write zeroes read block ...passed 00:08:33.938 Test: blockdev write zeroes read no split ...passed 00:08:34.201 Test: blockdev write zeroes read split ...passed 00:08:34.201 Test: blockdev write zeroes read split partial ...passed 00:08:34.201 Test: blockdev reset ...[2024-12-08 20:54:55.002829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:34.201 [2024-12-08 20:54:55.006318] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:34.201 passed 00:08:34.201 Test: blockdev write read 8 blocks ...passed 00:08:34.201 Test: blockdev write read size > 128k ...passed 00:08:34.201 Test: blockdev write read invalid size ...passed 00:08:34.201 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:34.201 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:34.201 Test: blockdev write read max offset ...passed 00:08:34.201 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:34.201 Test: blockdev writev readv 8 blocks ...passed 00:08:34.201 Test: blockdev writev readv 30 x 1block ...passed 00:08:34.201 Test: blockdev writev readv block ...passed 00:08:34.201 Test: blockdev writev readv size > 128k ...passed 00:08:34.201 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:34.201 Test: blockdev comparev and writev ...[2024-12-08 20:54:55.015701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271e01000 len:0x1000 00:08:34.201 [2024-12-08 20:54:55.015770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:34.201 passed 00:08:34.201 Test: blockdev nvme passthru rw ...passed 00:08:34.201 Test: blockdev nvme passthru vendor specific ...[2024-12-08 20:54:55.016614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:34.201 [2024-12-08 20:54:55.016657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:34.201 passed 00:08:34.201 Test: blockdev nvme admin passthru ...passed 00:08:34.201 Test: blockdev copy ...passed 00:08:34.201 Suite: bdevio tests on: Nvme1n1 00:08:34.201 Test: blockdev write read block ...passed 00:08:34.201 Test: blockdev write zeroes read block ...passed 00:08:34.201 Test: blockdev write zeroes read no split ...passed 00:08:34.201 Test: blockdev write zeroes read split ...passed 00:08:34.201 Test: blockdev write zeroes read split partial ...passed 00:08:34.201 Test: blockdev reset ...[2024-12-08 20:54:55.074299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:34.201 [2024-12-08 20:54:55.077678] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:34.201 passed 00:08:34.201 Test: blockdev write read 8 blocks ...passed 00:08:34.201 Test: blockdev write read size > 128k ...passed 00:08:34.201 Test: blockdev write read invalid size ...passed 00:08:34.201 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:34.201 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:34.201 Test: blockdev write read max offset ...passed 00:08:34.201 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:34.201 Test: blockdev writev readv 8 blocks ...passed 00:08:34.201 Test: blockdev writev readv 30 x 1block ...passed 00:08:34.201 Test: blockdev writev readv block ...passed 00:08:34.201 Test: blockdev writev readv size > 128k ...passed 00:08:34.201 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:34.201 Test: blockdev comparev and writev ...[2024-12-08 20:54:55.086963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27ac06000 len:0x1000 00:08:34.201 [2024-12-08 20:54:55.087174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:34.201 passed 00:08:34.201 Test: blockdev nvme passthru rw ...passed 00:08:34.201 Test: blockdev nvme passthru vendor specific ...passed 00:08:34.201 Test: blockdev nvme admin passthru ...[2024-12-08 20:54:55.088025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:34.201 [2024-12-08 20:54:55.088068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:34.201 passed 00:08:34.201 Test: blockdev copy ...passed 00:08:34.201 Suite: bdevio tests on: Nvme0n1 00:08:34.201 Test: blockdev write read block ...passed 00:08:34.201 Test: blockdev write zeroes read block ...passed 00:08:34.201 Test: blockdev write zeroes read no split ...passed 00:08:34.201 Test: blockdev write zeroes read split ...passed 00:08:34.201 Test: blockdev write zeroes read split partial ...passed 00:08:34.201 Test: blockdev reset ...[2024-12-08 20:54:55.146538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:34.201 [2024-12-08 20:54:55.149872] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:34.201 passed 00:08:34.201 Test: blockdev write read 8 blocks ...passed 00:08:34.201 Test: blockdev write read size > 128k ...passed 00:08:34.201 Test: blockdev write read invalid size ...passed 00:08:34.201 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:34.201 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:34.201 Test: blockdev write read max offset ...passed 00:08:34.201 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:34.201 Test: blockdev writev readv 8 blocks ...passed 00:08:34.201 Test: blockdev writev readv 30 x 1block ...passed 00:08:34.201 Test: blockdev writev readv block ...passed 00:08:34.201 Test: blockdev writev readv size > 128k ...passed 00:08:34.201 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:34.201 Test: blockdev comparev and writev ...passed 00:08:34.201 Test: blockdev nvme passthru rw ...[2024-12-08 20:54:55.157673] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:34.201 separate metadata which is not supported yet. 00:08:34.201 passed 00:08:34.201 Test: blockdev nvme passthru vendor specific ...passed 00:08:34.201 Test: blockdev nvme admin passthru ...[2024-12-08 20:54:55.158220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:34.201 [2024-12-08 20:54:55.158264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:34.201 passed 00:08:34.201 Test: blockdev copy ...passed 00:08:34.201 00:08:34.201 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.201 suites 6 6 n/a 0 0 00:08:34.201 tests 138 138 138 0 0 00:08:34.201 asserts 893 893 893 0 n/a 00:08:34.201 00:08:34.201 Elapsed time = 1.157 seconds 00:08:34.201 0 00:08:34.201 20:54:55 -- bdev/blockdev.sh@293 -- # killprocess 60912 00:08:34.201 20:54:55 -- common/autotest_common.sh@936 -- # '[' -z 60912 ']' 00:08:34.201 20:54:55 -- common/autotest_common.sh@940 -- # kill -0 60912 00:08:34.201 20:54:55 -- common/autotest_common.sh@941 -- # uname 00:08:34.201 20:54:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:34.201 20:54:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60912 00:08:34.201 20:54:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:34.201 20:54:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:34.201 20:54:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60912' 00:08:34.201 killing process with pid 60912 00:08:34.201 20:54:55 -- common/autotest_common.sh@955 -- # kill 60912 00:08:34.201 20:54:55 -- common/autotest_common.sh@960 -- # wait 60912 00:08:35.138 20:54:55 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:35.138 00:08:35.138 real 0m2.400s 00:08:35.138 user 0m6.007s 00:08:35.138 sys 0m0.344s 00:08:35.138 20:54:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.138 ************************************ 00:08:35.138 END TEST bdev_bounds 00:08:35.138 ************************************ 00:08:35.138 20:54:55 -- common/autotest_common.sh@10 -- # set +x 00:08:35.138 20:54:56 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:35.138 20:54:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:35.138 20:54:56 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.138 20:54:56 -- common/autotest_common.sh@10 -- # set +x 00:08:35.138 ************************************ 00:08:35.138 START TEST bdev_nbd 00:08:35.138 ************************************ 00:08:35.138 20:54:56 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:35.138 20:54:56 -- bdev/blockdev.sh@298 -- # uname -s 00:08:35.138 20:54:56 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:35.138 20:54:56 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.138 20:54:56 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:35.138 20:54:56 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:35.138 20:54:56 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:35.138 20:54:56 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:08:35.138 20:54:56 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:35.138 20:54:56 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:35.138 20:54:56 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:35.138 20:54:56 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:08:35.138 20:54:56 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:35.138 20:54:56 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:35.138 20:54:56 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:35.138 20:54:56 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:35.138 20:54:56 -- bdev/blockdev.sh@316 -- # nbd_pid=60971 00:08:35.138 20:54:56 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:35.138 20:54:56 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:35.138 20:54:56 -- bdev/blockdev.sh@318 -- # waitforlisten 60971 /var/tmp/spdk-nbd.sock 00:08:35.138 20:54:56 -- common/autotest_common.sh@829 -- # '[' -z 60971 ']' 00:08:35.138 20:54:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:35.138 20:54:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.138 20:54:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:35.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:35.138 20:54:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.138 20:54:56 -- common/autotest_common.sh@10 -- # set +x 00:08:35.138 [2024-12-08 20:54:56.119145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
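Note: nbd_function_test exports each bdev as a kernel /dev/nbdX block device: it starts bdev_svc on a dedicated RPC socket, checks for the nbd module (the [[ -e /sys/module/nbd ]] test above), and then maps bdevs one by one. A condensed sketch of the same setup by hand; the modprobe line is an assumption for hosts where nbd is not already loaded:

  modprobe nbd    # assumption: nbd built as a module and not yet loaded
  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json '' &
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0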
00:08:35.138 [2024-12-08 20:54:56.119482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.396 [2024-12-08 20:54:56.275506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.396 [2024-12-08 20:54:56.426129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.341 20:54:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.341 20:54:57 -- common/autotest_common.sh@862 -- # return 0 00:08:36.341 20:54:57 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@24 -- # local i 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:36.341 20:54:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:36.341 20:54:57 -- common/autotest_common.sh@867 -- # local i 00:08:36.341 20:54:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:36.341 20:54:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:36.341 20:54:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:36.341 20:54:57 -- common/autotest_common.sh@871 -- # break 00:08:36.341 20:54:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:36.341 20:54:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:36.341 20:54:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:36.341 1+0 records in 00:08:36.341 1+0 records out 00:08:36.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579604 s, 7.1 MB/s 00:08:36.341 20:54:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.341 20:54:57 -- common/autotest_common.sh@884 -- # size=4096 00:08:36.341 20:54:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.341 20:54:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:36.341 20:54:57 -- common/autotest_common.sh@887 -- # return 0 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:36.341 20:54:57 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:36.341 20:54:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:36.601 20:54:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:36.601 20:54:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:36.601 20:54:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:36.601 20:54:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:36.601 20:54:57 -- common/autotest_common.sh@867 -- # local i 00:08:36.601 20:54:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:36.601 20:54:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:36.601 20:54:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:36.601 20:54:57 -- common/autotest_common.sh@871 -- # break 00:08:36.601 20:54:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:36.601 20:54:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:36.601 20:54:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:36.601 1+0 records in 00:08:36.601 1+0 records out 00:08:36.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540569 s, 7.6 MB/s 00:08:36.601 20:54:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.601 20:54:57 -- common/autotest_common.sh@884 -- # size=4096 00:08:36.601 20:54:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.601 20:54:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:36.601 20:54:57 -- common/autotest_common.sh@887 -- # return 0 00:08:36.601 20:54:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:36.601 20:54:57 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:36.601 20:54:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:36.861 20:54:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:36.861 20:54:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:36.861 20:54:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:36.861 20:54:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:36.861 20:54:57 -- common/autotest_common.sh@867 -- # local i 00:08:36.862 20:54:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:36.862 20:54:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:36.862 20:54:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:36.862 20:54:57 -- common/autotest_common.sh@871 -- # break 00:08:36.862 20:54:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:36.862 20:54:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:36.862 20:54:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:36.862 1+0 records in 00:08:36.862 1+0 records out 00:08:36.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629007 s, 6.5 MB/s 00:08:36.862 20:54:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.862 20:54:57 -- common/autotest_common.sh@884 -- # size=4096 00:08:36.862 20:54:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.862 20:54:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:36.862 20:54:57 -- common/autotest_common.sh@887 -- # return 0 
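Note: every nbd_start_disk above is validated by the waitfornbd helper traced here: poll /proc/partitions up to 20 times for the device name, then issue a single 4 KiB O_DIRECT read and require a non-empty result. A condensed sketch of that check; the sleep between polls is an assumption (the loop body between retries is not visible in the trace) and the output path is shortened:

  for i in $(seq 1 20); do
    grep -q -w nbd1 /proc/partitions && break
    sleep 0.1    # assumption: brief delay between retries
  done
  dd if=/dev/nbd1 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]    # a 4096-byte read means the export is serving I/O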
00:08:36.862 20:54:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:36.862 20:54:57 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:37.122 20:54:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:37.122 20:54:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:37.122 20:54:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:37.122 20:54:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:37.122 20:54:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:37.122 20:54:58 -- common/autotest_common.sh@867 -- # local i 00:08:37.122 20:54:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:37.122 20:54:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:37.122 20:54:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:37.122 20:54:58 -- common/autotest_common.sh@871 -- # break 00:08:37.122 20:54:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:37.122 20:54:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:37.122 20:54:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.122 1+0 records in 00:08:37.122 1+0 records out 00:08:37.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137142 s, 3.0 MB/s 00:08:37.122 20:54:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.122 20:54:58 -- common/autotest_common.sh@884 -- # size=4096 00:08:37.122 20:54:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.122 20:54:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:37.122 20:54:58 -- common/autotest_common.sh@887 -- # return 0 00:08:37.122 20:54:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:37.122 20:54:58 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:37.122 20:54:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:37.690 20:54:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:37.690 20:54:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:37.690 20:54:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:37.690 20:54:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:37.690 20:54:58 -- common/autotest_common.sh@867 -- # local i 00:08:37.690 20:54:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:37.690 20:54:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:37.690 20:54:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:37.690 20:54:58 -- common/autotest_common.sh@871 -- # break 00:08:37.690 20:54:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:37.690 20:54:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:37.690 20:54:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.690 1+0 records in 00:08:37.690 1+0 records out 00:08:37.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749892 s, 5.5 MB/s 00:08:37.690 20:54:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.690 20:54:58 -- common/autotest_common.sh@884 -- # size=4096 00:08:37.690 20:54:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.690 20:54:58 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:08:37.690 20:54:58 -- common/autotest_common.sh@887 -- # return 0 00:08:37.690 20:54:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:37.690 20:54:58 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:37.690 20:54:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:37.950 20:54:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:37.950 20:54:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:37.950 20:54:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:37.950 20:54:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:37.950 20:54:58 -- common/autotest_common.sh@867 -- # local i 00:08:37.950 20:54:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:37.950 20:54:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:37.950 20:54:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:37.950 20:54:58 -- common/autotest_common.sh@871 -- # break 00:08:37.950 20:54:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:37.950 20:54:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:37.950 20:54:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.950 1+0 records in 00:08:37.950 1+0 records out 00:08:37.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870782 s, 4.7 MB/s 00:08:37.950 20:54:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.950 20:54:58 -- common/autotest_common.sh@884 -- # size=4096 00:08:37.950 20:54:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.950 20:54:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:37.950 20:54:58 -- common/autotest_common.sh@887 -- # return 0 00:08:37.950 20:54:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:37.950 20:54:58 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:37.950 20:54:58 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.209 20:54:59 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:38.209 { 00:08:38.209 "nbd_device": "/dev/nbd0", 00:08:38.209 "bdev_name": "Nvme0n1" 00:08:38.209 }, 00:08:38.209 { 00:08:38.209 "nbd_device": "/dev/nbd1", 00:08:38.209 "bdev_name": "Nvme1n1" 00:08:38.209 }, 00:08:38.209 { 00:08:38.209 "nbd_device": "/dev/nbd2", 00:08:38.209 "bdev_name": "Nvme2n1" 00:08:38.209 }, 00:08:38.209 { 00:08:38.210 "nbd_device": "/dev/nbd3", 00:08:38.210 "bdev_name": "Nvme2n2" 00:08:38.210 }, 00:08:38.210 { 00:08:38.210 "nbd_device": "/dev/nbd4", 00:08:38.210 "bdev_name": "Nvme2n3" 00:08:38.210 }, 00:08:38.210 { 00:08:38.210 "nbd_device": "/dev/nbd5", 00:08:38.210 "bdev_name": "Nvme3n1" 00:08:38.210 } 00:08:38.210 ]' 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:38.210 { 00:08:38.210 "nbd_device": "/dev/nbd0", 00:08:38.210 "bdev_name": "Nvme0n1" 00:08:38.210 }, 00:08:38.210 { 00:08:38.210 "nbd_device": "/dev/nbd1", 00:08:38.210 "bdev_name": "Nvme1n1" 00:08:38.210 }, 00:08:38.210 { 00:08:38.210 "nbd_device": "/dev/nbd2", 00:08:38.210 "bdev_name": "Nvme2n1" 00:08:38.210 }, 00:08:38.210 { 00:08:38.210 "nbd_device": "/dev/nbd3", 00:08:38.210 "bdev_name": "Nvme2n2" 00:08:38.210 }, 00:08:38.210 { 00:08:38.210 "nbd_device": 
"/dev/nbd4", 00:08:38.210 "bdev_name": "Nvme2n3" 00:08:38.210 }, 00:08:38.210 { 00:08:38.210 "nbd_device": "/dev/nbd5", 00:08:38.210 "bdev_name": "Nvme3n1" 00:08:38.210 } 00:08:38.210 ]' 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@51 -- # local i 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.210 20:54:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@41 -- # break 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.469 20:54:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@41 -- # break 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.728 20:54:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@41 -- # break 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:38.988 
20:54:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.988 20:54:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:38.988 20:55:00 -- bdev/nbd_common.sh@41 -- # break 00:08:38.988 20:55:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.988 20:55:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.988 20:55:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@41 -- # break 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.247 20:55:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@41 -- # break 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.507 20:55:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@65 -- # true 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@65 -- # count=0 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@122 -- # count=0 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@127 -- # return 0 00:08:39.767 20:55:00 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:39.767 20:55:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.768 20:55:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:39.768 20:55:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:39.768 20:55:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:39.768 20:55:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:39.768 20:55:00 -- bdev/nbd_common.sh@12 -- # local i 00:08:39.768 20:55:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:39.768 20:55:00 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:39.768 20:55:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:40.030 /dev/nbd0 00:08:40.030 20:55:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:40.030 20:55:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:40.030 20:55:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:40.030 20:55:01 -- common/autotest_common.sh@867 -- # local i 00:08:40.030 20:55:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:40.030 20:55:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:40.030 20:55:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:40.030 20:55:01 -- common/autotest_common.sh@871 -- # break 00:08:40.030 20:55:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:40.030 20:55:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:40.030 20:55:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.030 1+0 records in 00:08:40.030 1+0 records out 00:08:40.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007581 s, 5.4 MB/s 00:08:40.030 20:55:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.030 20:55:01 -- common/autotest_common.sh@884 -- # size=4096 00:08:40.030 20:55:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.030 20:55:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:40.030 20:55:01 -- common/autotest_common.sh@887 -- # return 0 00:08:40.030 20:55:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.030 20:55:01 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:40.030 20:55:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:40.290 /dev/nbd1 00:08:40.290 20:55:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:40.290 20:55:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:40.290 20:55:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:40.290 20:55:01 -- common/autotest_common.sh@867 -- # local i 00:08:40.290 20:55:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:40.290 20:55:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:40.290 20:55:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:40.290 20:55:01 -- common/autotest_common.sh@871 -- # break 
00:08:40.290 20:55:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:40.290 20:55:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:40.290 20:55:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.290 1+0 records in 00:08:40.290 1+0 records out 00:08:40.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672317 s, 6.1 MB/s 00:08:40.290 20:55:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.290 20:55:01 -- common/autotest_common.sh@884 -- # size=4096 00:08:40.290 20:55:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.290 20:55:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:40.290 20:55:01 -- common/autotest_common.sh@887 -- # return 0 00:08:40.290 20:55:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.290 20:55:01 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:40.290 20:55:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:40.548 /dev/nbd10 00:08:40.548 20:55:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:40.548 20:55:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:40.548 20:55:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:40.548 20:55:01 -- common/autotest_common.sh@867 -- # local i 00:08:40.548 20:55:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:40.548 20:55:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:40.548 20:55:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:40.548 20:55:01 -- common/autotest_common.sh@871 -- # break 00:08:40.548 20:55:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:40.548 20:55:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:40.548 20:55:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.548 1+0 records in 00:08:40.548 1+0 records out 00:08:40.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00370327 s, 1.1 MB/s 00:08:40.548 20:55:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.548 20:55:01 -- common/autotest_common.sh@884 -- # size=4096 00:08:40.548 20:55:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.548 20:55:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:40.548 20:55:01 -- common/autotest_common.sh@887 -- # return 0 00:08:40.548 20:55:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.548 20:55:01 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:40.548 20:55:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:40.807 /dev/nbd11 00:08:40.807 20:55:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:40.807 20:55:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:40.807 20:55:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:40.807 20:55:01 -- common/autotest_common.sh@867 -- # local i 00:08:40.807 20:55:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:40.807 20:55:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:40.807 20:55:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:40.807 20:55:01 -- 
common/autotest_common.sh@871 -- # break 00:08:40.807 20:55:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:40.807 20:55:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:40.807 20:55:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.807 1+0 records in 00:08:40.807 1+0 records out 00:08:40.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765197 s, 5.4 MB/s 00:08:40.807 20:55:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.807 20:55:01 -- common/autotest_common.sh@884 -- # size=4096 00:08:40.807 20:55:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.807 20:55:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:40.807 20:55:01 -- common/autotest_common.sh@887 -- # return 0 00:08:40.807 20:55:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.807 20:55:01 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:40.807 20:55:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:41.065 /dev/nbd12 00:08:41.065 20:55:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:41.065 20:55:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:41.065 20:55:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:41.065 20:55:02 -- common/autotest_common.sh@867 -- # local i 00:08:41.065 20:55:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.065 20:55:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.065 20:55:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:41.065 20:55:02 -- common/autotest_common.sh@871 -- # break 00:08:41.065 20:55:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.065 20:55:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.065 20:55:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.065 1+0 records in 00:08:41.065 1+0 records out 00:08:41.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735507 s, 5.6 MB/s 00:08:41.065 20:55:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.065 20:55:02 -- common/autotest_common.sh@884 -- # size=4096 00:08:41.065 20:55:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.065 20:55:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.065 20:55:02 -- common/autotest_common.sh@887 -- # return 0 00:08:41.065 20:55:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.065 20:55:02 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:41.065 20:55:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:41.324 /dev/nbd13 00:08:41.324 20:55:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:41.324 20:55:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:41.324 20:55:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:41.324 20:55:02 -- common/autotest_common.sh@867 -- # local i 00:08:41.324 20:55:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.324 20:55:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.324 20:55:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:08:41.324 20:55:02 -- common/autotest_common.sh@871 -- # break 00:08:41.324 20:55:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.324 20:55:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.324 20:55:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.324 1+0 records in 00:08:41.324 1+0 records out 00:08:41.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740628 s, 5.5 MB/s 00:08:41.324 20:55:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.324 20:55:02 -- common/autotest_common.sh@884 -- # size=4096 00:08:41.324 20:55:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.324 20:55:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.324 20:55:02 -- common/autotest_common.sh@887 -- # return 0 00:08:41.324 20:55:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.324 20:55:02 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:41.324 20:55:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.324 20:55:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.324 20:55:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:41.583 20:55:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd0", 00:08:41.583 "bdev_name": "Nvme0n1" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd1", 00:08:41.583 "bdev_name": "Nvme1n1" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd10", 00:08:41.583 "bdev_name": "Nvme2n1" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd11", 00:08:41.583 "bdev_name": "Nvme2n2" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd12", 00:08:41.583 "bdev_name": "Nvme2n3" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd13", 00:08:41.583 "bdev_name": "Nvme3n1" 00:08:41.583 } 00:08:41.583 ]' 00:08:41.583 20:55:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd0", 00:08:41.583 "bdev_name": "Nvme0n1" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd1", 00:08:41.583 "bdev_name": "Nvme1n1" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd10", 00:08:41.583 "bdev_name": "Nvme2n1" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd11", 00:08:41.583 "bdev_name": "Nvme2n2" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd12", 00:08:41.583 "bdev_name": "Nvme2n3" 00:08:41.583 }, 00:08:41.583 { 00:08:41.583 "nbd_device": "/dev/nbd13", 00:08:41.583 "bdev_name": "Nvme3n1" 00:08:41.583 } 00:08:41.583 ]' 00:08:41.583 20:55:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.583 20:55:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:41.583 /dev/nbd1 00:08:41.583 /dev/nbd10 00:08:41.583 /dev/nbd11 00:08:41.583 /dev/nbd12 00:08:41.583 /dev/nbd13' 00:08:41.583 20:55:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:41.583 /dev/nbd1 00:08:41.583 /dev/nbd10 00:08:41.583 /dev/nbd11 00:08:41.583 /dev/nbd12 00:08:41.583 /dev/nbd13' 00:08:41.583 20:55:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@65 -- # count=6 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@66 -- # echo 6 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@95 -- # 
count=6 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:41.842 256+0 records in 00:08:41.842 256+0 records out 00:08:41.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00764371 s, 137 MB/s 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:41.842 256+0 records in 00:08:41.842 256+0 records out 00:08:41.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16792 s, 6.2 MB/s 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.842 20:55:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:42.102 256+0 records in 00:08:42.102 256+0 records out 00:08:42.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171599 s, 6.1 MB/s 00:08:42.102 20:55:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.102 20:55:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:42.102 256+0 records in 00:08:42.102 256+0 records out 00:08:42.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147939 s, 7.1 MB/s 00:08:42.102 20:55:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.102 20:55:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:42.361 256+0 records in 00:08:42.361 256+0 records out 00:08:42.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168582 s, 6.2 MB/s 00:08:42.361 20:55:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.361 20:55:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:42.621 256+0 records in 00:08:42.621 256+0 records out 00:08:42.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14848 s, 7.1 MB/s 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:42.621 256+0 records in 00:08:42.621 256+0 records out 00:08:42.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170908 s, 6.1 MB/s 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:42.621 20:55:03 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.621 20:55:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@51 -- # local i 00:08:42.880 20:55:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.881 20:55:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@41 -- # break 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.140 20:55:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:43.412 20:55:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:43.412 20:55:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:43.412 20:55:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:43.412 20:55:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.412 20:55:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.412 20:55:04 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:43.412 20:55:04 -- bdev/nbd_common.sh@41 -- # break 00:08:43.412 20:55:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.412 20:55:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.413 20:55:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@41 -- # break 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.696 20:55:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@41 -- # break 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@41 -- # break 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.960 20:55:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@41 -- # break 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.227 20:55:05 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@65 -- # true 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@65 -- # count=0 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@104 -- # count=0 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@109 -- # return 0 00:08:44.488 20:55:05 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:44.488 20:55:05 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:44.746 malloc_lvol_verify 00:08:44.746 20:55:05 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:45.005 59c0058b-310e-441c-9e00-c9a98f9f23ec 00:08:45.005 20:55:05 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:45.264 f32218e3-3a3e-4546-b38d-132e12e05d7b 00:08:45.264 20:55:06 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:45.523 /dev/nbd0 00:08:45.523 20:55:06 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:45.523 mke2fs 1.47.0 (5-Feb-2023) 00:08:45.523 Discarding device blocks: 0/4096 done 00:08:45.523 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:45.523 00:08:45.523 Allocating group tables: 0/1 done 00:08:45.523 Writing inode tables: 0/1 done 00:08:45.523 Creating journal (1024 blocks): done 00:08:45.523 Writing superblocks and filesystem accounting information: 0/1 done 00:08:45.523 00:08:45.523 20:55:06 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:45.523 20:55:06 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:45.523 20:55:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.523 20:55:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:45.523 20:55:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.523 20:55:06 -- bdev/nbd_common.sh@51 -- # local i 00:08:45.523 20:55:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.523 20:55:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@41 -- # break 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:45.782 20:55:06 -- bdev/nbd_common.sh@147 -- # return 0 00:08:45.782 20:55:06 -- bdev/blockdev.sh@324 -- # killprocess 60971 00:08:45.782 20:55:06 -- common/autotest_common.sh@936 -- # '[' -z 60971 ']' 00:08:45.782 20:55:06 -- common/autotest_common.sh@940 -- # kill -0 60971 00:08:45.782 20:55:06 -- common/autotest_common.sh@941 -- # uname 00:08:45.782 20:55:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.782 20:55:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60971 00:08:45.782 20:55:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.782 20:55:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.782 killing process with pid 60971 00:08:45.782 20:55:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60971' 00:08:45.782 20:55:06 -- common/autotest_common.sh@955 -- # kill 60971 00:08:45.782 20:55:06 -- common/autotest_common.sh@960 -- # wait 60971 00:08:46.718 20:55:07 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:46.718 00:08:46.718 real 0m11.441s 00:08:46.718 user 0m16.400s 00:08:46.718 sys 0m3.545s 00:08:46.718 20:55:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.718 20:55:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.718 ************************************ 00:08:46.718 END TEST bdev_nbd 00:08:46.718 ************************************ 00:08:46.719 20:55:07 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:46.719 20:55:07 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:08:46.719 skipping fio tests on NVMe due to multi-ns failures. 00:08:46.719 20:55:07 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:46.719 20:55:07 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:46.719 20:55:07 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:46.719 20:55:07 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:46.719 20:55:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.719 20:55:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.719 ************************************ 00:08:46.719 START TEST bdev_verify 00:08:46.719 ************************************ 00:08:46.719 20:55:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:46.719 [2024-12-08 20:55:07.632039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
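The bdev_nbd suite that finishes above drives everything through SPDK's rpc.py against the /var/tmp/spdk-nbd.sock socket: export a bdev as /dev/nbdX, poll /proc/partitions until the kernel publishes it, push data through with dd, compare with cmp, then stop the export. Below is a minimal standalone sketch of that lifecycle, assembled only from the rpc.py subcommands and flags visible in the trace; the bdev name and the retry cadence are illustrative.

    #!/usr/bin/env bash
    set -euo pipefail

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    BDEV=Nvme0n1        # illustrative; any bdev reported by the target works
    DEV=/dev/nbd0
    TMP=$(mktemp)

    # Export the bdev as an NBD block device (same RPC the trace issues).
    "$RPC" -s "$SOCK" nbd_start_disk "$BDEV" "$DEV"

    # Wait for the kernel to publish the device, mirroring waitfornbd's
    # bounded grep of /proc/partitions (up to 20 attempts in the trace).
    for _ in $(seq 1 20); do
        grep -q -w "$(basename "$DEV")" /proc/partitions && break
        sleep 0.1
    done

    # Write 1 MiB of random data through the NBD device and read it back,
    # the same dd/cmp pattern as nbd_dd_data_verify above.
    dd if=/dev/urandom of="$TMP" bs=4096 count=256
    dd if="$TMP" of="$DEV" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$TMP" "$DEV"

    # Tear the export down and clean up.
    "$RPC" -s "$SOCK" nbd_stop_disk "$DEV"
    rm -f "$TMP"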
00:08:46.719 [2024-12-08 20:55:07.632242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61355 ] 00:08:46.977 [2024-12-08 20:55:07.799205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:46.977 [2024-12-08 20:55:07.941948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.977 [2024-12-08 20:55:07.941968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.543 Running I/O for 5 seconds... 00:08:52.807 00:08:52.807 Latency(us) 00:08:52.807 [2024-12-08T20:55:13.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x0 length 0xbd0bd 00:08:52.807 Nvme0n1 : 5.04 2854.58 11.15 0.00 0.00 44732.45 6494.02 50045.67 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:52.807 Nvme0n1 : 5.04 2853.30 11.15 0.00 0.00 44739.28 8162.21 56241.80 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x0 length 0xa0000 00:08:52.807 Nvme1n1 : 5.04 2853.79 11.15 0.00 0.00 44708.59 6523.81 48139.17 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0xa0000 length 0xa0000 00:08:52.807 Nvme1n1 : 5.04 2852.32 11.14 0.00 0.00 44716.39 8519.68 54096.99 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x0 length 0x80000 00:08:52.807 Nvme2n1 : 5.04 2852.95 11.14 0.00 0.00 44673.67 6851.49 44564.48 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x80000 length 0x80000 00:08:52.807 Nvme2n1 : 5.05 2855.99 11.16 0.00 0.00 44547.24 2934.23 42419.67 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x0 length 0x80000 00:08:52.807 Nvme2n2 : 5.05 2856.96 11.16 0.00 0.00 44558.92 2353.34 40751.48 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x80000 length 0x80000 00:08:52.807 Nvme2n2 : 5.05 2854.82 11.15 0.00 0.00 44503.02 4081.11 42896.29 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x0 length 0x80000 00:08:52.807 Nvme2n3 : 5.05 2856.19 11.16 0.00 0.00 44530.30 2874.65 40751.48 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x80000 length 0x80000 00:08:52.807 Nvme2n3 : 5.05 2860.90 11.18 0.00 0.00 44378.34 2219.29 43372.92 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x0 length 0x20000 00:08:52.807 Nvme3n1 : 
5.05 2854.96 11.15 0.00 0.00 44502.78 4021.53 40513.16 00:08:52.807 [2024-12-08T20:55:13.850Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:52.807 Verification LBA range: start 0x20000 length 0x20000 00:08:52.807 Nvme3n1 : 5.05 2860.35 11.17 0.00 0.00 44355.98 2338.44 43372.92 00:08:52.807 [2024-12-08T20:55:13.850Z] =================================================================================================================== 00:08:52.807 [2024-12-08T20:55:13.850Z] Total : 34267.11 133.86 0.00 0.00 44578.68 2219.29 56241.80 00:09:02.781 00:09:02.781 real 0m15.216s 00:09:02.781 user 0m29.180s 00:09:02.781 sys 0m0.314s 00:09:02.781 20:55:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.781 20:55:22 -- common/autotest_common.sh@10 -- # set +x 00:09:02.781 ************************************ 00:09:02.781 END TEST bdev_verify 00:09:02.781 ************************************ 00:09:02.781 20:55:22 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:02.781 20:55:22 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:02.781 20:55:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.781 20:55:22 -- common/autotest_common.sh@10 -- # set +x 00:09:02.781 ************************************ 00:09:02.781 START TEST bdev_verify_big_io 00:09:02.781 ************************************ 00:09:02.781 20:55:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:02.781 [2024-12-08 20:55:22.907804] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:02.781 [2024-12-08 20:55:22.907964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61542 ] 00:09:02.781 [2024-12-08 20:55:23.076654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.781 [2024-12-08 20:55:23.223020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.781 [2024-12-08 20:55:23.223028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.040 Running I/O for 5 seconds... 
00:09:08.316 00:09:08.316 Latency(us) 00:09:08.316 [2024-12-08T20:55:29.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x0 length 0xbd0b 00:09:08.316 Nvme0n1 : 5.31 295.01 18.44 0.00 0.00 426018.37 45994.36 579576.55 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:08.316 Nvme0n1 : 5.33 301.78 18.86 0.00 0.00 416008.47 46470.98 606267.58 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x0 length 0xa000 00:09:08.316 Nvme1n1 : 5.31 294.90 18.43 0.00 0.00 420612.15 45756.04 526194.50 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0xa000 length 0xa000 00:09:08.316 Nvme1n1 : 5.34 301.68 18.86 0.00 0.00 411116.69 46709.29 564324.54 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x0 length 0x8000 00:09:08.316 Nvme2n1 : 5.34 301.55 18.85 0.00 0.00 408028.54 22878.02 476625.45 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x8000 length 0x8000 00:09:08.316 Nvme2n1 : 5.34 301.59 18.85 0.00 0.00 406571.43 46470.98 522381.50 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x0 length 0x8000 00:09:08.316 Nvme2n2 : 5.34 301.41 18.84 0.00 0.00 403022.12 23473.80 428962.91 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x8000 length 0x8000 00:09:08.316 Nvme2n2 : 5.35 310.35 19.40 0.00 0.00 394068.29 3961.95 484251.46 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x0 length 0x8000 00:09:08.316 Nvme2n3 : 5.35 309.81 19.36 0.00 0.00 388678.56 10545.34 478531.96 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x8000 length 0x8000 00:09:08.316 Nvme2n3 : 5.35 318.99 19.94 0.00 0.00 380196.13 3485.32 444214.92 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x0 length 0x2000 00:09:08.316 Nvme3n1 : 5.37 325.62 20.35 0.00 0.00 366040.44 5749.29 480438.46 00:09:08.316 [2024-12-08T20:55:29.359Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:08.316 Verification LBA range: start 0x2000 length 0x2000 00:09:08.316 Nvme3n1 : 5.35 318.89 19.93 0.00 0.00 375872.81 3604.48 402271.88 00:09:08.316 [2024-12-08T20:55:29.359Z] =================================================================================================================== 00:09:08.316 [2024-12-08T20:55:29.359Z] Total : 3681.59 230.10 0.00 0.00 399091.67 3485.32 606267.58 00:09:09.696 00:09:09.696 real 0m7.813s 00:09:09.696 user 
0m14.445s 00:09:09.696 sys 0m0.267s 00:09:09.696 20:55:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.696 20:55:30 -- common/autotest_common.sh@10 -- # set +x 00:09:09.696 ************************************ 00:09:09.696 END TEST bdev_verify_big_io 00:09:09.696 ************************************ 00:09:09.696 20:55:30 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.696 20:55:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:09.696 20:55:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.696 20:55:30 -- common/autotest_common.sh@10 -- # set +x 00:09:09.696 ************************************ 00:09:09.696 START TEST bdev_write_zeroes 00:09:09.696 ************************************ 00:09:09.696 20:55:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.956 [2024-12-08 20:55:30.769913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:09.956 [2024-12-08 20:55:30.770103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61646 ] 00:09:09.956 [2024-12-08 20:55:30.939375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.215 [2024-12-08 20:55:31.092345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.784 Running I/O for 1 seconds... 00:09:11.720 00:09:11.720 Latency(us) 00:09:11.720 [2024-12-08T20:55:32.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.720 [2024-12-08T20:55:32.763Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:11.720 Nvme0n1 : 1.01 9213.47 35.99 0.00 0.00 13848.21 9949.56 26810.18 00:09:11.720 [2024-12-08T20:55:32.763Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:11.720 Nvme1n1 : 1.02 9201.94 35.95 0.00 0.00 13845.16 10545.34 27525.12 00:09:11.720 [2024-12-08T20:55:32.763Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:11.720 Nvme2n1 : 1.02 9231.86 36.06 0.00 0.00 13742.11 8162.21 23116.33 00:09:11.720 [2024-12-08T20:55:32.763Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:11.720 Nvme2n2 : 1.02 9221.06 36.02 0.00 0.00 13737.06 8162.21 23116.33 00:09:11.720 [2024-12-08T20:55:32.763Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:11.720 Nvme2n3 : 1.02 9260.68 36.17 0.00 0.00 13615.71 5272.67 18707.55 00:09:11.720 [2024-12-08T20:55:32.763Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:11.720 Nvme3n1 : 1.02 9251.08 36.14 0.00 0.00 13590.30 5272.67 16801.05 00:09:11.720 [2024-12-08T20:55:32.763Z] =================================================================================================================== 00:09:11.720 [2024-12-08T20:55:32.763Z] Total : 55380.10 216.33 0.00 0.00 13729.21 5272.67 27525.12 00:09:12.659 00:09:12.659 real 0m2.886s 00:09:12.659 user 0m2.556s 00:09:12.659 sys 0m0.211s 00:09:12.659 20:55:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.659 
20:55:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.659 ************************************ 00:09:12.659 END TEST bdev_write_zeroes 00:09:12.659 ************************************ 00:09:12.659 20:55:33 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:12.659 20:55:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:12.659 20:55:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.659 20:55:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.659 ************************************ 00:09:12.659 START TEST bdev_json_nonenclosed 00:09:12.659 ************************************ 00:09:12.659 20:55:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:12.659 [2024-12-08 20:55:33.676720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:12.659 [2024-12-08 20:55:33.676852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61699 ] 00:09:12.918 [2024-12-08 20:55:33.830468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.177 [2024-12-08 20:55:33.973663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.177 [2024-12-08 20:55:33.973882] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:13.177 [2024-12-08 20:55:33.973909] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:13.437 00:09:13.437 real 0m0.668s 00:09:13.437 user 0m0.464s 00:09:13.437 sys 0m0.100s 00:09:13.437 20:55:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.437 20:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.437 ************************************ 00:09:13.437 END TEST bdev_json_nonenclosed 00:09:13.437 ************************************ 00:09:13.437 20:55:34 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:13.437 20:55:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:13.437 20:55:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.437 20:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.437 ************************************ 00:09:13.437 START TEST bdev_json_nonarray 00:09:13.437 ************************************ 00:09:13.437 20:55:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:13.437 [2024-12-08 20:55:34.395309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
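The nonenclosed test above, and the nonarray test starting here, feed bdevperf deliberately broken --json configs and expect the two parser errors visible in the log ("not enclosed in {}" and, just below, "'subsystems' should be an array"). The fixture files themselves are not reproduced in the log; the heredocs below are only a plausible minimal illustration of the three shapes, built around SPDK's expected top-level {"subsystems": [...]} object.

    # Well-formed (if empty): a JSON object whose "subsystems" is an array.
    cat > good.json <<'EOF'
    {
      "subsystems": []
    }
    EOF

    # nonenclosed.json: valid-looking content, but not wrapped in {}.
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF

    # nonarray.json: enclosed in {}, but "subsystems" is not an array.
    cat > nonarray.json <<'EOF'
    {
      "subsystems": {}
    }
    EOF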
00:09:13.437 [2024-12-08 20:55:34.395440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61725 ] 00:09:13.697 [2024-12-08 20:55:34.547748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.697 [2024-12-08 20:55:34.690833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.697 [2024-12-08 20:55:34.691042] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:09:13.697 [2024-12-08 20:55:34.691070] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:13.956 00:09:13.956 real 0m0.660s 00:09:13.956 user 0m0.445s 00:09:13.956 sys 0m0.110s 00:09:13.956 20:55:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.956 20:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.956 ************************************ 00:09:13.956 END TEST bdev_json_nonarray 00:09:13.956 ************************************ 00:09:14.220 20:55:35 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:09:14.220 20:55:35 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:09:14.220 20:55:35 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:09:14.220 20:55:35 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:14.220 20:55:35 -- bdev/blockdev.sh@809 -- # cleanup 00:09:14.220 20:55:35 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:14.220 20:55:35 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:14.220 20:55:35 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:09:14.220 20:55:35 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:09:14.220 20:55:35 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:09:14.220 20:55:35 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:09:14.220 00:09:14.220 real 0m47.258s 00:09:14.220 user 1m15.406s 00:09:14.220 sys 0m5.880s 00:09:14.220 20:55:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.220 20:55:35 -- common/autotest_common.sh@10 -- # set +x 00:09:14.220 ************************************ 00:09:14.220 END TEST blockdev_nvme 00:09:14.220 ************************************ 00:09:14.220 20:55:35 -- spdk/autotest.sh@206 -- # uname -s 00:09:14.220 20:55:35 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:09:14.220 20:55:35 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:14.220 20:55:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:14.220 20:55:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.220 20:55:35 -- common/autotest_common.sh@10 -- # set +x 00:09:14.220 ************************************ 00:09:14.220 START TEST blockdev_nvme_gpt 00:09:14.220 ************************************ 00:09:14.221 20:55:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:14.221 * Looking for test storage... 
00:09:14.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:14.221 20:55:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:14.221 20:55:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:14.221 20:55:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:14.221 20:55:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:14.221 20:55:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:14.221 20:55:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:14.221 20:55:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:14.221 20:55:35 -- scripts/common.sh@335 -- # IFS=.-: 00:09:14.221 20:55:35 -- scripts/common.sh@335 -- # read -ra ver1 00:09:14.221 20:55:35 -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.221 20:55:35 -- scripts/common.sh@336 -- # read -ra ver2 00:09:14.221 20:55:35 -- scripts/common.sh@337 -- # local 'op=<' 00:09:14.221 20:55:35 -- scripts/common.sh@339 -- # ver1_l=2 00:09:14.221 20:55:35 -- scripts/common.sh@340 -- # ver2_l=1 00:09:14.221 20:55:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:14.221 20:55:35 -- scripts/common.sh@343 -- # case "$op" in 00:09:14.221 20:55:35 -- scripts/common.sh@344 -- # : 1 00:09:14.221 20:55:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:14.221 20:55:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.221 20:55:35 -- scripts/common.sh@364 -- # decimal 1 00:09:14.221 20:55:35 -- scripts/common.sh@352 -- # local d=1 00:09:14.221 20:55:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.221 20:55:35 -- scripts/common.sh@354 -- # echo 1 00:09:14.221 20:55:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:14.221 20:55:35 -- scripts/common.sh@365 -- # decimal 2 00:09:14.221 20:55:35 -- scripts/common.sh@352 -- # local d=2 00:09:14.221 20:55:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.482 20:55:35 -- scripts/common.sh@354 -- # echo 2 00:09:14.482 20:55:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:14.482 20:55:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:14.482 20:55:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:14.482 20:55:35 -- scripts/common.sh@367 -- # return 0 00:09:14.482 20:55:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.482 20:55:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:14.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.482 --rc genhtml_branch_coverage=1 00:09:14.482 --rc genhtml_function_coverage=1 00:09:14.482 --rc genhtml_legend=1 00:09:14.482 --rc geninfo_all_blocks=1 00:09:14.482 --rc geninfo_unexecuted_blocks=1 00:09:14.482 00:09:14.482 ' 00:09:14.482 20:55:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:14.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.482 --rc genhtml_branch_coverage=1 00:09:14.482 --rc genhtml_function_coverage=1 00:09:14.482 --rc genhtml_legend=1 00:09:14.482 --rc geninfo_all_blocks=1 00:09:14.482 --rc geninfo_unexecuted_blocks=1 00:09:14.482 00:09:14.482 ' 00:09:14.482 20:55:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:14.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.482 --rc genhtml_branch_coverage=1 00:09:14.482 --rc genhtml_function_coverage=1 00:09:14.482 --rc genhtml_legend=1 00:09:14.482 --rc geninfo_all_blocks=1 00:09:14.482 --rc geninfo_unexecuted_blocks=1 00:09:14.482 00:09:14.482 ' 00:09:14.482 20:55:35 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:14.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.482 --rc genhtml_branch_coverage=1 00:09:14.482 --rc genhtml_function_coverage=1 00:09:14.482 --rc genhtml_legend=1 00:09:14.482 --rc geninfo_all_blocks=1 00:09:14.482 --rc geninfo_unexecuted_blocks=1 00:09:14.482 00:09:14.482 ' 00:09:14.482 20:55:35 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:14.482 20:55:35 -- bdev/nbd_common.sh@6 -- # set -e 00:09:14.482 20:55:35 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:14.482 20:55:35 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:14.482 20:55:35 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:14.482 20:55:35 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:14.482 20:55:35 -- bdev/blockdev.sh@18 -- # : 00:09:14.482 20:55:35 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:14.482 20:55:35 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:14.482 20:55:35 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:14.482 20:55:35 -- bdev/blockdev.sh@672 -- # uname -s 00:09:14.482 20:55:35 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:14.482 20:55:35 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:14.482 20:55:35 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:09:14.482 20:55:35 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:14.482 20:55:35 -- bdev/blockdev.sh@682 -- # dek= 00:09:14.482 20:55:35 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:14.482 20:55:35 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:14.482 20:55:35 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:14.482 20:55:35 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:09:14.482 20:55:35 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:09:14.482 20:55:35 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:14.482 20:55:35 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61808 00:09:14.482 20:55:35 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:14.482 20:55:35 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:14.482 20:55:35 -- bdev/blockdev.sh@47 -- # waitforlisten 61808 00:09:14.482 20:55:35 -- common/autotest_common.sh@829 -- # '[' -z 61808 ']' 00:09:14.482 20:55:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.482 20:55:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.482 20:55:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.482 20:55:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.482 20:55:35 -- common/autotest_common.sh@10 -- # set +x 00:09:14.482 [2024-12-08 20:55:35.360491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
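The gpt variant of the suite needs a long-lived spdk_tgt process to issue RPCs against, so the harness launches the binary, records its pid (61808 here), and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start/wait/stop pattern follows, assuming the binary path and default /var/tmp/spdk.sock socket from the trace; polling rpc_get_methods is one simple readiness probe, and the real waitforlisten is more thorough.

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock

    "$SPDK_TGT" &           # start the target in the background
    tgt_pid=$!

    # Block until the target's RPC server answers on the socket.
    until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || exit 1  # target died during startup
        sleep 0.2
    done

    # ... run GPT/bdev tests against the target here ...

    kill "$tgt_pid"
    wait "$tgt_pid" || true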
00:09:14.482 [2024-12-08 20:55:35.360657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61808 ] 00:09:14.482 [2024-12-08 20:55:35.508253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.741 [2024-12-08 20:55:35.666103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.741 [2024-12-08 20:55:35.666334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.309 20:55:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.309 20:55:36 -- common/autotest_common.sh@862 -- # return 0 00:09:15.309 20:55:36 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:15.309 20:55:36 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:09:15.309 20:55:36 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:15.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.877 Waiting for block devices as requested 00:09:15.877 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:16.135 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:16.135 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:16.135 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.404 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:21.404 20:55:42 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:09:21.404 20:55:42 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:09:21.404 20:55:42 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:09:21.404 20:55:42 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:09:21.404 20:55:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:21.404 20:55:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:21.404 20:55:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:21.404 20:55:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:21.404 20:55:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:09:21.404 20:55:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:09:21.404 20:55:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:21.404 20:55:42 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:21.404 20:55:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:09:21.404 20:55:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:09:21.404 20:55:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:21.404 20:55:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:21.404 20:55:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:09:21.404 20:55:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:21.404 20:55:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:21.404 20:55:42 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:09:21.404 20:55:42 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:09:21.404 20:55:42 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:09:21.404 20:55:42 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:21.404 20:55:42 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:09:21.404 20:55:42 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:09:21.404 20:55:42 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:09:21.404 20:55:42 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:09:21.404 BYT; 00:09:21.404 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:21.404 20:55:42 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:09:21.404 BYT; 00:09:21.404 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:21.404 20:55:42 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:09:21.404 20:55:42 -- bdev/blockdev.sh@114 -- # break 00:09:21.404 20:55:42 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:09:21.404 20:55:42 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:21.404 20:55:42 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:21.404 20:55:42 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:21.404 20:55:42 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:09:21.404 20:55:42 -- scripts/common.sh@410 -- # local spdk_guid 00:09:21.404 20:55:42 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:21.404 20:55:42 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:21.404 20:55:42 -- scripts/common.sh@415 -- # IFS='()' 00:09:21.404 20:55:42 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:09:21.404 20:55:42 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:21.404 20:55:42 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:21.404 20:55:42 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:21.404 20:55:42 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:21.404 20:55:42 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:21.404 20:55:42 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:09:21.404 20:55:42 -- scripts/common.sh@422 -- # local spdk_guid 00:09:21.404 20:55:42 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:21.404 20:55:42 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:21.404 20:55:42 -- scripts/common.sh@427 -- # IFS='()' 00:09:21.404 20:55:42 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:09:21.404 20:55:42 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:21.404 20:55:42 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:21.404 20:55:42 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:21.404 20:55:42 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:21.404 20:55:42 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:21.404 20:55:42 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:09:22.342 The operation has completed successfully. 00:09:22.342 20:55:43 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:09:23.740 The operation has completed successfully. 
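The get_spdk_gpt / get_spdk_gpt_old helpers traced above recover SPDK's GPT partition-type GUIDs straight from the C header, so the shell test and the gpt bdev module can never disagree about them. The intermediate values in the trace (0x6527994e-0x2c5a-... collapsing to 6527994e-2c5a-...) show the transformation; a condensed sketch of the same idea, assuming the macro keeps all of its arguments inside one pair of parentheses:

    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    # The header line looks like: #define SPDK_GPT_PART_TYPE_GUID GPT_GUID(0x6527994e, 0x2c5a, ...)
    # Splitting on '(' and ')' keeps only the argument list in $spdk_guid.
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//[[:space:]]/}  # drop the spaces between macro arguments
    spdk_guid=${spdk_guid//,/-}           # commas become the GUID's dashes
    spdk_guid=${spdk_guid//0x/}           # strip the C-style 0x prefixes
    echo "$spdk_guid"                     # 6527994e-2c5a-4eec-9613-8f5944074e8b

sgdisk then stamps that value onto the freshly created partitions: -t sets the partition-type GUID and -u the fixed unique GUID (g_unique_partguid), which is how the gpt bdev module later recognizes SPDK_TEST_first and SPDK_TEST_second.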
00:09:23.740 20:55:44 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:24.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.566 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.566 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.566 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.566 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.566 20:55:45 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:09:24.566 20:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.566 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:24.566 [] 00:09:24.566 20:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.566 20:55:45 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:09:24.566 20:55:45 -- bdev/blockdev.sh@79 -- # local json 00:09:24.566 20:55:45 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:24.566 20:55:45 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:24.825 20:55:45 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:24.825 20:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.825 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:25.084 20:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.084 20:55:45 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:25.084 20:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.084 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:25.084 20:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.084 20:55:45 -- bdev/blockdev.sh@738 -- # cat 00:09:25.084 20:55:45 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:25.084 20:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.084 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:25.084 20:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.084 20:55:45 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:25.084 20:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.084 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:25.084 20:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.084 20:55:45 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:25.084 20:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.084 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:25.084 20:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.084 20:55:45 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:25.084 20:55:45 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:25.084 20:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.084 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:25.084 20:55:45 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:25.084 20:55:46 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.084 20:55:46 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:25.084 20:55:46 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:25.085 20:55:46 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1b9685bc-1123-4d0e-931d-efde4ca1c055"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1b9685bc-1123-4d0e-931d-efde4ca1c055",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "e6633a34-25d2-47c3-8159-b51daa55264f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e6633a34-25d2-47c3-8159-b51daa55264f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "316f1b23-d0f6-4aa2-9473-bf5a71f83db9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "316f1b23-d0f6-4aa2-9473-bf5a71f83db9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f5dbc051-58ac-4879-805e-2eb9889b86db"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f5dbc051-58ac-4879-805e-2eb9889b86db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "96e58fee-da91-4be5-bc58-e9342952343d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "96e58fee-da91-4be5-bc58-e9342952343d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:25.344 20:55:46 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:25.344 20:55:46 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:09:25.344 20:55:46 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:25.344 20:55:46 -- bdev/blockdev.sh@752 -- # killprocess 61808 00:09:25.344 20:55:46 -- common/autotest_common.sh@936 -- # '[' -z 61808 ']' 00:09:25.344 20:55:46 -- common/autotest_common.sh@940 -- # kill -0 61808 00:09:25.344 20:55:46 -- common/autotest_common.sh@941 -- # uname 00:09:25.344 20:55:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:25.344 20:55:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61808 00:09:25.344 20:55:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:25.344 20:55:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:25.344 killing process with pid 61808 00:09:25.344 20:55:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61808' 00:09:25.344 20:55:46 -- common/autotest_common.sh@955 -- # kill 61808 00:09:25.344 20:55:46 -- common/autotest_common.sh@960 -- # wait 61808 00:09:26.723 20:55:47 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:26.723 20:55:47 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:26.723 20:55:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:26.723 20:55:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.723 20:55:47 -- common/autotest_common.sh@10 -- # set +x 00:09:26.994 ************************************ 00:09:26.994 START TEST bdev_hello_world 00:09:26.994 ************************************ 00:09:26.994 20:55:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:26.994 [2024-12-08 20:55:47.841645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:26.994 [2024-12-08 20:55:47.841784] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62461 ] 00:09:26.994 [2024-12-08 20:55:47.992305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.272 [2024-12-08 20:55:48.137951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.849 [2024-12-08 20:55:48.666278] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:27.849 [2024-12-08 20:55:48.666325] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:27.849 [2024-12-08 20:55:48.666363] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:27.849 [2024-12-08 20:55:48.668838] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:27.849 [2024-12-08 20:55:48.669432] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:27.849 [2024-12-08 20:55:48.669485] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:27.849 [2024-12-08 20:55:48.669822] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:27.849 00:09:27.849 [2024-12-08 20:55:48.669859] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:28.785 00:09:28.785 real 0m1.739s 00:09:28.785 user 0m1.444s 00:09:28.785 sys 0m0.188s 00:09:28.785 20:55:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.785 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:09:28.785 ************************************ 00:09:28.785 END TEST bdev_hello_world 00:09:28.785 ************************************ 00:09:28.785 20:55:49 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:28.785 20:55:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:28.785 20:55:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.785 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:09:28.786 ************************************ 00:09:28.786 START TEST bdev_bounds 00:09:28.786 ************************************ 00:09:28.786 20:55:49 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:09:28.786 20:55:49 -- bdev/blockdev.sh@288 -- # bdevio_pid=62503 00:09:28.786 20:55:49 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:28.786 20:55:49 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:28.786 Process bdevio pid: 62503 00:09:28.786 20:55:49 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 62503' 00:09:28.786 20:55:49 -- bdev/blockdev.sh@291 -- # waitforlisten 62503 00:09:28.786 20:55:49 -- common/autotest_common.sh@829 -- # '[' -z 62503 ']' 00:09:28.786 20:55:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.786 20:55:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.786 20:55:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
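Both spdk_tgt earlier and bdevio here are wrapped in the same startup idiom visible in the trace: install a trap that kills the daemon before waiting on its socket, so a hang or crash anywhere in the test cannot leak the process past the run. A minimal sketch of the pattern, with killprocess reduced to a plain kill-and-reap (the real helper in autotest_common.sh also checks the process name and handles sudo):

    # Hypothetical simplified form of the guard idiom used by blockdev.sh.
    killprocess() { kill "$1" 2> /dev/null; wait "$1" 2> /dev/null; }

    app_pid=$!                                # daemon just forked in the background
    trap 'killprocess "$app_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$app_pid"                  # only now is it safe to send RPCs

    # ... run the actual test against the daemon ...

    trap - SIGINT SIGTERM EXIT                # success path: clear the handler,
    killprocess "$app_pid"                    # then reap the daemon explicitly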
00:09:28.786 20:55:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.786 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:09:28.786 [2024-12-08 20:55:49.662870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:28.786 [2024-12-08 20:55:49.663058] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62503 ] 00:09:29.044 [2024-12-08 20:55:49.830076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:29.044 [2024-12-08 20:55:49.973291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.044 [2024-12-08 20:55:49.973397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.044 [2024-12-08 20:55:49.973408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.420 20:55:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.420 20:55:51 -- common/autotest_common.sh@862 -- # return 0 00:09:30.420 20:55:51 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:30.420 I/O targets: 00:09:30.420 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:30.420 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:30.420 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:30.420 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:30.420 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:30.420 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:30.420 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:30.420 00:09:30.420 00:09:30.420 CUnit - A unit testing framework for C - Version 2.1-3 00:09:30.420 http://cunit.sourceforge.net/ 00:09:30.420 00:09:30.420 00:09:30.420 Suite: bdevio tests on: Nvme3n1 00:09:30.420 Test: blockdev write read block ...passed 00:09:30.420 Test: blockdev write zeroes read block ...passed 00:09:30.420 Test: blockdev write zeroes read no split ...passed 00:09:30.420 Test: blockdev write zeroes read split ...passed 00:09:30.420 Test: blockdev write zeroes read split partial ...passed 00:09:30.420 Test: blockdev reset ...[2024-12-08 20:55:51.421493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:30.420 [2024-12-08 20:55:51.425050] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:30.420 passed 00:09:30.420 Test: blockdev write read 8 blocks ...passed 00:09:30.420 Test: blockdev write read size > 128k ...passed 00:09:30.420 Test: blockdev write read invalid size ...passed 00:09:30.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.420 Test: blockdev write read max offset ...passed 00:09:30.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.420 Test: blockdev writev readv 8 blocks ...passed 00:09:30.420 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.420 Test: blockdev writev readv block ...passed 00:09:30.420 Test: blockdev writev readv size > 128k ...passed 00:09:30.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.420 Test: blockdev comparev and writev ...[2024-12-08 20:55:51.434055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27240a000 len:0x1000 00:09:30.420 [2024-12-08 20:55:51.434138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:30.420 passed 00:09:30.420 Test: blockdev nvme passthru rw ...passed 00:09:30.420 Test: blockdev nvme passthru vendor specific ...passed 00:09:30.420 Test: blockdev nvme admin passthru ...[2024-12-08 20:55:51.434913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:30.420 [2024-12-08 20:55:51.434968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:30.420 passed 00:09:30.420 Test: blockdev copy ...passed 00:09:30.420 Suite: bdevio tests on: Nvme2n3 00:09:30.420 Test: blockdev write read block ...passed 00:09:30.420 Test: blockdev write zeroes read block ...passed 00:09:30.420 Test: blockdev write zeroes read no split ...passed 00:09:30.678 Test: blockdev write zeroes read split ...passed 00:09:30.678 Test: blockdev write zeroes read split partial ...passed 00:09:30.678 Test: blockdev reset ...[2024-12-08 20:55:51.499605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:30.678 [2024-12-08 20:55:51.503266] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:30.678 passed 00:09:30.678 Test: blockdev write read 8 blocks ...passed 00:09:30.678 Test: blockdev write read size > 128k ...passed 00:09:30.678 Test: blockdev write read invalid size ...passed 00:09:30.678 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.678 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.678 Test: blockdev write read max offset ...passed 00:09:30.678 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.678 Test: blockdev writev readv 8 blocks ...passed 00:09:30.678 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.678 Test: blockdev writev readv block ...passed 00:09:30.678 Test: blockdev writev readv size > 128k ...passed 00:09:30.678 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.678 Test: blockdev comparev and writev ...[2024-12-08 20:55:51.512132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x270704000 len:0x1000 00:09:30.678 passed 00:09:30.678 Test: blockdev nvme passthru rw ...[2024-12-08 20:55:51.512233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:30.678 passed 00:09:30.678 Test: blockdev nvme passthru vendor specific ...passed 00:09:30.678 Test: blockdev nvme admin passthru ...[2024-12-08 20:55:51.513106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:30.678 [2024-12-08 20:55:51.513165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:30.678 passed 00:09:30.678 Test: blockdev copy ...passed 00:09:30.678 Suite: bdevio tests on: Nvme2n2 00:09:30.678 Test: blockdev write read block ...passed 00:09:30.678 Test: blockdev write zeroes read block ...passed 00:09:30.678 Test: blockdev write zeroes read no split ...passed 00:09:30.678 Test: blockdev write zeroes read split ...passed 00:09:30.678 Test: blockdev write zeroes read split partial ...passed 00:09:30.678 Test: blockdev reset ...[2024-12-08 20:55:51.592342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:30.678 [2024-12-08 20:55:51.595855] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:30.678 passed 00:09:30.678 Test: blockdev write read 8 blocks ...passed 00:09:30.678 Test: blockdev write read size > 128k ...passed 00:09:30.678 Test: blockdev write read invalid size ...passed 00:09:30.679 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.679 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.679 Test: blockdev write read max offset ...passed 00:09:30.679 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.679 Test: blockdev writev readv 8 blocks ...passed 00:09:30.679 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.679 Test: blockdev writev readv block ...passed 00:09:30.679 Test: blockdev writev readv size > 128k ...passed 00:09:30.679 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.679 Test: blockdev comparev and writev ...[2024-12-08 20:55:51.605266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x270704000 len:0x1000 00:09:30.679 [2024-12-08 20:55:51.605332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:30.679 passed 00:09:30.679 Test: blockdev nvme passthru rw ...passed 00:09:30.679 Test: blockdev nvme passthru vendor specific ...[2024-12-08 20:55:51.606121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:30.679 [2024-12-08 20:55:51.606192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:30.679 passed 00:09:30.679 Test: blockdev nvme admin passthru ...passed 00:09:30.679 Test: blockdev copy ...passed 00:09:30.679 Suite: bdevio tests on: Nvme2n1 00:09:30.679 Test: blockdev write read block ...passed 00:09:30.679 Test: blockdev write zeroes read block ...passed 00:09:30.679 Test: blockdev write zeroes read no split ...passed 00:09:30.679 Test: blockdev write zeroes read split ...passed 00:09:30.679 Test: blockdev write zeroes read split partial ...passed 00:09:30.679 Test: blockdev reset ...[2024-12-08 20:55:51.678258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:30.679 passed 00:09:30.679 Test: blockdev write read 8 blocks ...[2024-12-08 20:55:51.681632] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:30.679 passed 00:09:30.679 Test: blockdev write read size > 128k ...passed 00:09:30.679 Test: blockdev write read invalid size ...passed 00:09:30.679 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.679 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.679 Test: blockdev write read max offset ...passed 00:09:30.679 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.679 Test: blockdev writev readv 8 blocks ...passed 00:09:30.679 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.679 Test: blockdev writev readv block ...passed 00:09:30.679 Test: blockdev writev readv size > 128k ...passed 00:09:30.679 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.679 Test: blockdev comparev and writev ...[2024-12-08 20:55:51.690379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28de3c000 len:0x1000 00:09:30.679 [2024-12-08 20:55:51.690450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:30.679 passed 00:09:30.679 Test: blockdev nvme passthru rw ...passed 00:09:30.679 Test: blockdev nvme passthru vendor specific ...[2024-12-08 20:55:51.691426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:30.679 passed 00:09:30.679 Test: blockdev nvme admin passthru ...[2024-12-08 20:55:51.691497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:30.679 passed 00:09:30.679 Test: blockdev copy ...passed 00:09:30.679 Suite: bdevio tests on: Nvme1n1 00:09:30.679 Test: blockdev write read block ...passed 00:09:30.679 Test: blockdev write zeroes read block ...passed 00:09:30.679 Test: blockdev write zeroes read no split ...passed 00:09:30.937 Test: blockdev write zeroes read split ...passed 00:09:30.937 Test: blockdev write zeroes read split partial ...passed 00:09:30.937 Test: blockdev reset ...[2024-12-08 20:55:51.775766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:30.937 [2024-12-08 20:55:51.779561] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:30.937 passed 00:09:30.937 Test: blockdev write read 8 blocks ...passed 00:09:30.937 Test: blockdev write read size > 128k ...passed 00:09:30.937 Test: blockdev write read invalid size ...passed 00:09:30.937 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.937 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.937 Test: blockdev write read max offset ...passed 00:09:30.937 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.937 Test: blockdev writev readv 8 blocks ...passed 00:09:30.937 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.937 Test: blockdev writev readv block ...passed 00:09:30.937 Test: blockdev writev readv size > 128k ...passed 00:09:30.937 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.937 Test: blockdev comparev and writev ...[2024-12-08 20:55:51.789079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28de38000 len:0x1000 00:09:30.937 [2024-12-08 20:55:51.789174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:30.937 passed 00:09:30.937 Test: blockdev nvme passthru rw ...passed 00:09:30.937 Test: blockdev nvme passthru vendor specific ...[2024-12-08 20:55:51.790327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:30.937 [2024-12-08 20:55:51.790366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:30.937 passed 00:09:30.937 Test: blockdev nvme admin passthru ...passed 00:09:30.937 Test: blockdev copy ...passed 00:09:30.938 Suite: bdevio tests on: Nvme0n1p2 00:09:30.938 Test: blockdev write read block ...passed 00:09:30.938 Test: blockdev write zeroes read block ...passed 00:09:30.938 Test: blockdev write zeroes read no split ...passed 00:09:30.938 Test: blockdev write zeroes read split ...passed 00:09:30.938 Test: blockdev write zeroes read split partial ...passed 00:09:30.938 Test: blockdev reset ...[2024-12-08 20:55:51.862980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:30.938 passed 00:09:30.938 Test: blockdev write read 8 blocks ...[2024-12-08 20:55:51.866198] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:30.938 passed 00:09:30.938 Test: blockdev write read size > 128k ...passed 00:09:30.938 Test: blockdev write read invalid size ...passed 00:09:30.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.938 Test: blockdev write read max offset ...passed 00:09:30.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.938 Test: blockdev writev readv 8 blocks ...passed 00:09:30.938 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.938 Test: blockdev writev readv block ...passed 00:09:30.938 Test: blockdev writev readv size > 128k ...passed 00:09:30.938 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.938 Test: blockdev comparev and writev ...passed 00:09:30.938 Test: blockdev nvme passthru rw ...passed 00:09:30.938 Test: blockdev nvme passthru vendor specific ...passed 00:09:30.938 Test: blockdev nvme admin passthru ...passed 00:09:30.938 Test: blockdev copy ...[2024-12-08 20:55:51.874619] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:30.938 separate metadata which is not supported yet. 00:09:30.938 passed 00:09:30.938 Suite: bdevio tests on: Nvme0n1p1 00:09:30.938 Test: blockdev write read block ...passed 00:09:30.938 Test: blockdev write zeroes read block ...passed 00:09:30.938 Test: blockdev write zeroes read no split ...passed 00:09:30.938 Test: blockdev write zeroes read split ...passed 00:09:30.938 Test: blockdev write zeroes read split partial ...passed 00:09:30.938 Test: blockdev reset ...[2024-12-08 20:55:51.942357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:30.938 [2024-12-08 20:55:51.945541] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:30.938 passed 00:09:30.938 Test: blockdev write read 8 blocks ...passed 00:09:30.938 Test: blockdev write read size > 128k ...passed 00:09:30.938 Test: blockdev write read invalid size ...passed 00:09:30.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.938 Test: blockdev write read max offset ...passed 00:09:30.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.938 Test: blockdev writev readv 8 blocks ...passed 00:09:30.938 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.938 Test: blockdev writev readv block ...passed 00:09:30.938 Test: blockdev writev readv size > 128k ...passed 00:09:30.938 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.938 Test: blockdev comparev and writev ...passed 00:09:30.938 Test: blockdev nvme passthru rw ...passed 00:09:30.938 Test: blockdev nvme passthru vendor specific ...passed 00:09:30.938 Test: blockdev nvme admin passthru ...passed 00:09:30.938 Test: blockdev copy ...[2024-12-08 20:55:51.954355] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:30.938 separate metadata which is not supported yet. 
00:09:30.938 passed 00:09:30.938 00:09:30.938 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.938 suites 7 7 n/a 0 0 00:09:30.938 tests 161 161 161 0 0 00:09:30.938 asserts 1006 1006 1006 0 n/a 00:09:30.938 00:09:30.938 Elapsed time = 1.595 seconds 00:09:30.938 0 00:09:30.938 20:55:51 -- bdev/blockdev.sh@293 -- # killprocess 62503 00:09:30.938 20:55:51 -- common/autotest_common.sh@936 -- # '[' -z 62503 ']' 00:09:30.938 20:55:51 -- common/autotest_common.sh@940 -- # kill -0 62503 00:09:30.938 20:55:51 -- common/autotest_common.sh@941 -- # uname 00:09:30.938 20:55:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:31.196 20:55:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62503 00:09:31.196 killing process with pid 62503 00:09:31.196 20:55:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:31.196 20:55:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:31.196 20:55:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62503' 00:09:31.196 20:55:52 -- common/autotest_common.sh@955 -- # kill 62503 00:09:31.196 20:55:52 -- common/autotest_common.sh@960 -- # wait 62503 00:09:31.762 ************************************ 00:09:31.762 END TEST bdev_bounds 00:09:31.762 ************************************ 00:09:31.762 20:55:52 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:31.762 00:09:31.762 real 0m3.202s 00:09:31.762 user 0m8.527s 00:09:31.762 sys 0m0.384s 00:09:31.762 20:55:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.762 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:09:32.021 20:55:52 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:32.021 20:55:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:32.021 20:55:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.021 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:09:32.021 ************************************ 00:09:32.021 START TEST bdev_nbd 00:09:32.021 ************************************ 00:09:32.021 20:55:52 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:32.021 20:55:52 -- bdev/blockdev.sh@298 -- # uname -s 00:09:32.021 20:55:52 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:32.021 20:55:52 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.021 20:55:52 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:32.021 20:55:52 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:32.021 20:55:52 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:32.021 20:55:52 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:09:32.021 20:55:52 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:32.021 20:55:52 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:32.021 20:55:52 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:32.021 20:55:52 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:09:32.021 20:55:52 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:32.021 20:55:52 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:32.021 20:55:52 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:32.021 20:55:52 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:32.021 20:55:52 -- bdev/blockdev.sh@316 -- # nbd_pid=62572 00:09:32.021 20:55:52 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:32.021 20:55:52 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:32.021 20:55:52 -- bdev/blockdev.sh@318 -- # waitforlisten 62572 /var/tmp/spdk-nbd.sock 00:09:32.021 20:55:52 -- common/autotest_common.sh@829 -- # '[' -z 62572 ']' 00:09:32.021 20:55:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:32.021 20:55:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:32.021 20:55:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:32.021 20:55:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.021 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:09:32.021 [2024-12-08 20:55:52.963682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:32.021 [2024-12-08 20:55:52.964872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.278 [2024-12-08 20:55:53.143166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.278 [2024-12-08 20:55:53.287096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.844 20:55:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.844 20:55:53 -- common/autotest_common.sh@862 -- # return 0 00:09:32.844 20:55:53 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@24 -- # local i 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:32.844 20:55:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:33.101 20:55:54 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:33.101 20:55:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:33.101 20:55:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:33.101 20:55:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:33.101 20:55:54 -- common/autotest_common.sh@867 -- # local i 00:09:33.101 20:55:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:33.101 20:55:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:33.101 20:55:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:33.101 20:55:54 -- common/autotest_common.sh@871 -- # break 00:09:33.101 20:55:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:33.101 20:55:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:33.101 20:55:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:33.101 1+0 records in 00:09:33.101 1+0 records out 00:09:33.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110522 s, 3.7 MB/s 00:09:33.101 20:55:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.101 20:55:54 -- common/autotest_common.sh@884 -- # size=4096 00:09:33.102 20:55:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.358 20:55:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:33.359 20:55:54 -- common/autotest_common.sh@887 -- # return 0 00:09:33.359 20:55:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:33.359 20:55:54 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:33.359 20:55:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:33.616 20:55:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:33.616 20:55:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:33.616 20:55:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:33.616 20:55:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:33.616 20:55:54 -- common/autotest_common.sh@867 -- # local i 00:09:33.616 20:55:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:33.616 20:55:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:33.616 20:55:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:33.616 20:55:54 -- common/autotest_common.sh@871 -- # break 00:09:33.616 20:55:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:33.616 20:55:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:33.616 20:55:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:33.616 1+0 records in 00:09:33.616 1+0 records out 00:09:33.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557099 s, 7.4 MB/s 00:09:33.616 20:55:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.616 20:55:54 -- common/autotest_common.sh@884 -- # size=4096 00:09:33.616 20:55:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.616 20:55:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:33.616 20:55:54 -- common/autotest_common.sh@887 -- # return 0 00:09:33.616 20:55:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:33.616 20:55:54 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:33.616 20:55:54 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:33.873 20:55:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:33.873 20:55:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:33.873 20:55:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:33.873 20:55:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:33.873 20:55:54 -- common/autotest_common.sh@867 -- # local i 00:09:33.873 20:55:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:33.873 20:55:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:33.873 20:55:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:33.873 20:55:54 -- common/autotest_common.sh@871 -- # break 00:09:33.873 20:55:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:33.873 20:55:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:33.873 20:55:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:33.873 1+0 records in 00:09:33.873 1+0 records out 00:09:33.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602116 s, 6.8 MB/s 00:09:33.873 20:55:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.873 20:55:54 -- common/autotest_common.sh@884 -- # size=4096 00:09:33.873 20:55:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.873 20:55:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:33.873 20:55:54 -- common/autotest_common.sh@887 -- # return 0 00:09:33.873 20:55:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:33.873 20:55:54 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:33.873 20:55:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:34.131 20:55:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:34.131 20:55:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:34.132 20:55:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:34.132 20:55:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:34.132 20:55:54 -- common/autotest_common.sh@867 -- # local i 00:09:34.132 20:55:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:34.132 20:55:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:34.132 20:55:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:34.132 20:55:54 -- common/autotest_common.sh@871 -- # break 00:09:34.132 20:55:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:34.132 20:55:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:34.132 20:55:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.132 1+0 records in 00:09:34.132 1+0 records out 00:09:34.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781448 s, 5.2 MB/s 00:09:34.132 20:55:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.132 20:55:54 -- common/autotest_common.sh@884 -- # size=4096 00:09:34.132 20:55:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.132 20:55:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:34.132 20:55:55 -- common/autotest_common.sh@887 -- # return 0 00:09:34.132 20:55:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:34.132 20:55:55 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:34.132 20:55:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:34.389 20:55:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:34.389 20:55:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:34.389 20:55:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:34.389 20:55:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:34.389 20:55:55 -- common/autotest_common.sh@867 -- # local i 00:09:34.389 20:55:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:34.389 20:55:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:34.389 20:55:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:34.389 20:55:55 -- common/autotest_common.sh@871 -- # break 00:09:34.389 20:55:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:34.389 20:55:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:34.389 20:55:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.389 1+0 records in 00:09:34.389 1+0 records out 00:09:34.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000922652 s, 4.4 MB/s 00:09:34.389 20:55:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.389 20:55:55 -- common/autotest_common.sh@884 -- # size=4096 00:09:34.389 20:55:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.389 20:55:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:34.389 20:55:55 -- common/autotest_common.sh@887 -- # return 0 00:09:34.389 20:55:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:34.389 20:55:55 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:34.389 20:55:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:34.647 20:55:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:34.647 20:55:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:34.647 20:55:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:34.647 20:55:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:34.647 20:55:55 -- common/autotest_common.sh@867 -- # local i 00:09:34.647 20:55:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:34.647 20:55:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:34.647 20:55:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:34.647 20:55:55 -- common/autotest_common.sh@871 -- # break 00:09:34.647 20:55:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:34.647 20:55:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:34.647 20:55:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.647 1+0 records in 00:09:34.647 1+0 records out 00:09:34.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927057 s, 4.4 MB/s 00:09:34.647 20:55:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.647 20:55:55 -- common/autotest_common.sh@884 -- # size=4096 00:09:34.647 20:55:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.647 20:55:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:34.647 20:55:55 -- common/autotest_common.sh@887 -- # return 0 
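Every nbd_start_disk RPC above is followed by the same waitfornbd probe: poll /proc/partitions until the new node appears, then read a single 4096-byte block with O_DIRECT, so the check only passes if the kernel can actually complete I/O through the device rather than merely list it. A stripped-down sketch of that probe, keeping the trace's retry limit of 20 (output path is illustrative):

    # Hypothetical condensed form of waitfornbd from nbd_common.sh.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One O_DIRECT read proves the device is serviceable, not just present.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        (( $(stat -c %s /tmp/nbdtest) != 0 ))  # dd must have copied real data
    }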
00:09:34.647 20:55:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:34.647 20:55:55 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:34.647 20:55:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:34.905 20:55:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:34.905 20:55:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:34.905 20:55:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:34.905 20:55:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:34.905 20:55:55 -- common/autotest_common.sh@867 -- # local i 00:09:34.905 20:55:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:34.905 20:55:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:34.905 20:55:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:34.905 20:55:55 -- common/autotest_common.sh@871 -- # break 00:09:34.905 20:55:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:34.905 20:55:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:34.905 20:55:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.905 1+0 records in 00:09:34.905 1+0 records out 00:09:34.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719532 s, 5.7 MB/s 00:09:34.905 20:55:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.905 20:55:55 -- common/autotest_common.sh@884 -- # size=4096 00:09:34.905 20:55:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.905 20:55:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:34.905 20:55:55 -- common/autotest_common.sh@887 -- # return 0 00:09:34.905 20:55:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:34.905 20:55:55 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:34.905 20:55:55 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd0", 00:09:35.164 "bdev_name": "Nvme0n1p1" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd1", 00:09:35.164 "bdev_name": "Nvme0n1p2" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd2", 00:09:35.164 "bdev_name": "Nvme1n1" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd3", 00:09:35.164 "bdev_name": "Nvme2n1" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd4", 00:09:35.164 "bdev_name": "Nvme2n2" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd5", 00:09:35.164 "bdev_name": "Nvme2n3" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd6", 00:09:35.164 "bdev_name": "Nvme3n1" 00:09:35.164 } 00:09:35.164 ]' 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd0", 00:09:35.164 "bdev_name": "Nvme0n1p1" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd1", 00:09:35.164 "bdev_name": "Nvme0n1p2" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd2", 00:09:35.164 "bdev_name": "Nvme1n1" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd3", 00:09:35.164 "bdev_name": "Nvme2n1" 00:09:35.164 }, 00:09:35.164 { 
00:09:35.164 "nbd_device": "/dev/nbd4", 00:09:35.164 "bdev_name": "Nvme2n2" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd5", 00:09:35.164 "bdev_name": "Nvme2n3" 00:09:35.164 }, 00:09:35.164 { 00:09:35.164 "nbd_device": "/dev/nbd6", 00:09:35.164 "bdev_name": "Nvme3n1" 00:09:35.164 } 00:09:35.164 ]' 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@51 -- # local i 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.164 20:55:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@41 -- # break 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.422 20:55:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:35.679 20:55:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:35.680 20:55:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:35.680 20:55:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:35.680 20:55:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.680 20:55:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.680 20:55:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:35.680 20:55:56 -- bdev/nbd_common.sh@41 -- # break 00:09:35.680 20:55:56 -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.680 20:55:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.680 20:55:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@41 -- # break 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.937 20:55:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:36.196 20:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
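The nbd_get_disks RPC above returns a JSON array pairing each /dev/nbdN node with its backing bdev, and the test extracts the device column with jq before tearing the mappings down one by one. One way to drive the same teardown from the live socket, reusing the rpc.py and socket paths from this run; the loop itself is an illustration rather than the nbd_common.sh code.

# List active nbd mappings over the RPC socket and stop each one.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for dev in $("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'); do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
done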
00:09:36.196 20:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:36.196 20:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:36.196 20:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.196 20:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.196 20:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:36.196 20:55:57 -- bdev/nbd_common.sh@41 -- # break 00:09:36.196 20:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.196 20:55:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.196 20:55:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@41 -- # break 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.455 20:55:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@41 -- # break 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.713 20:55:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@41 -- # break 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:36.972 20:55:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:37.232 20:55:58 -- 
bdev/nbd_common.sh@65 -- # true 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@65 -- # count=0 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@122 -- # count=0 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@127 -- # return 0 00:09:37.232 20:55:58 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@12 -- # local i 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:37.232 /dev/nbd0 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:37.232 20:55:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:37.232 20:55:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:37.232 20:55:58 -- common/autotest_common.sh@867 -- # local i 00:09:37.232 20:55:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:37.232 20:55:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:37.232 20:55:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:37.491 20:55:58 -- common/autotest_common.sh@871 -- # break 00:09:37.491 20:55:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:37.491 20:55:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:37.491 20:55:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.491 1+0 records in 00:09:37.491 1+0 records out 00:09:37.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569463 s, 7.2 MB/s 00:09:37.491 20:55:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.491 20:55:58 -- common/autotest_common.sh@884 -- # size=4096 00:09:37.491 20:55:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.491 20:55:58 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:37.491 20:55:58 -- common/autotest_common.sh@887 -- # return 0 00:09:37.491 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:37.491 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:37.491 20:55:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:37.491 /dev/nbd1 00:09:37.491 20:55:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:37.491 20:55:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:37.491 20:55:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:37.491 20:55:58 -- common/autotest_common.sh@867 -- # local i 00:09:37.491 20:55:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:37.491 20:55:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:37.491 20:55:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:37.491 20:55:58 -- common/autotest_common.sh@871 -- # break 00:09:37.491 20:55:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:37.491 20:55:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:37.491 20:55:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.491 1+0 records in 00:09:37.491 1+0 records out 00:09:37.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520356 s, 7.9 MB/s 00:09:37.491 20:55:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.491 20:55:58 -- common/autotest_common.sh@884 -- # size=4096 00:09:37.491 20:55:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.491 20:55:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:37.491 20:55:58 -- common/autotest_common.sh@887 -- # return 0 00:09:37.491 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:37.491 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:37.491 20:55:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:37.750 /dev/nbd10 00:09:37.750 20:55:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:37.750 20:55:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:37.750 20:55:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:37.750 20:55:58 -- common/autotest_common.sh@867 -- # local i 00:09:37.750 20:55:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:37.750 20:55:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:37.750 20:55:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:37.750 20:55:58 -- common/autotest_common.sh@871 -- # break 00:09:37.750 20:55:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:37.750 20:55:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:37.750 20:55:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.750 1+0 records in 00:09:37.750 1+0 records out 00:09:37.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058937 s, 6.9 MB/s 00:09:37.750 20:55:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.750 20:55:58 -- common/autotest_common.sh@884 -- # size=4096 00:09:37.750 20:55:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.750 
20:55:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:37.750 20:55:58 -- common/autotest_common.sh@887 -- # return 0 00:09:37.750 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:37.750 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:37.750 20:55:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:38.009 /dev/nbd11 00:09:38.009 20:55:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:38.009 20:55:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:38.009 20:55:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:38.009 20:55:58 -- common/autotest_common.sh@867 -- # local i 00:09:38.009 20:55:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:38.009 20:55:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:38.009 20:55:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:38.009 20:55:58 -- common/autotest_common.sh@871 -- # break 00:09:38.009 20:55:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:38.009 20:55:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:38.009 20:55:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.009 1+0 records in 00:09:38.009 1+0 records out 00:09:38.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702074 s, 5.8 MB/s 00:09:38.009 20:55:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.009 20:55:58 -- common/autotest_common.sh@884 -- # size=4096 00:09:38.009 20:55:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.009 20:55:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:38.009 20:55:58 -- common/autotest_common.sh@887 -- # return 0 00:09:38.009 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:38.009 20:55:58 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:38.009 20:55:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:38.268 /dev/nbd12 00:09:38.268 20:55:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:38.268 20:55:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:38.268 20:55:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:38.268 20:55:59 -- common/autotest_common.sh@867 -- # local i 00:09:38.268 20:55:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:38.268 20:55:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:38.268 20:55:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:38.268 20:55:59 -- common/autotest_common.sh@871 -- # break 00:09:38.268 20:55:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:38.268 20:55:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:38.268 20:55:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.268 1+0 records in 00:09:38.268 1+0 records out 00:09:38.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772082 s, 5.3 MB/s 00:09:38.268 20:55:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.268 20:55:59 -- common/autotest_common.sh@884 -- # size=4096 00:09:38.268 20:55:59 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.268 20:55:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:38.268 20:55:59 -- common/autotest_common.sh@887 -- # return 0 00:09:38.268 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:38.268 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:38.268 20:55:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:38.527 /dev/nbd13 00:09:38.527 20:55:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:38.527 20:55:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:38.527 20:55:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:38.527 20:55:59 -- common/autotest_common.sh@867 -- # local i 00:09:38.527 20:55:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:38.527 20:55:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:38.527 20:55:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:38.527 20:55:59 -- common/autotest_common.sh@871 -- # break 00:09:38.527 20:55:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:38.527 20:55:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:38.527 20:55:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.527 1+0 records in 00:09:38.527 1+0 records out 00:09:38.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799868 s, 5.1 MB/s 00:09:38.527 20:55:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.527 20:55:59 -- common/autotest_common.sh@884 -- # size=4096 00:09:38.527 20:55:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.527 20:55:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:38.527 20:55:59 -- common/autotest_common.sh@887 -- # return 0 00:09:38.527 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:38.527 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:38.527 20:55:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:38.785 /dev/nbd14 00:09:38.785 20:55:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:38.785 20:55:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:38.785 20:55:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:38.785 20:55:59 -- common/autotest_common.sh@867 -- # local i 00:09:38.785 20:55:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:38.785 20:55:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:38.785 20:55:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:38.785 20:55:59 -- common/autotest_common.sh@871 -- # break 00:09:38.785 20:55:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:38.785 20:55:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:38.785 20:55:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.785 1+0 records in 00:09:38.785 1+0 records out 00:09:38.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011068 s, 3.7 MB/s 00:09:38.785 20:55:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.785 20:55:59 -- common/autotest_common.sh@884 -- # size=4096 00:09:38.785 20:55:59 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.785 20:55:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:38.785 20:55:59 -- common/autotest_common.sh@887 -- # return 0 00:09:38.785 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:38.785 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:38.785 20:55:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:38.785 20:55:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.786 20:55:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.044 20:56:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd0", 00:09:39.044 "bdev_name": "Nvme0n1p1" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd1", 00:09:39.044 "bdev_name": "Nvme0n1p2" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd10", 00:09:39.044 "bdev_name": "Nvme1n1" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd11", 00:09:39.044 "bdev_name": "Nvme2n1" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd12", 00:09:39.044 "bdev_name": "Nvme2n2" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd13", 00:09:39.044 "bdev_name": "Nvme2n3" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd14", 00:09:39.044 "bdev_name": "Nvme3n1" 00:09:39.044 } 00:09:39.044 ]' 00:09:39.044 20:56:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd0", 00:09:39.044 "bdev_name": "Nvme0n1p1" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd1", 00:09:39.044 "bdev_name": "Nvme0n1p2" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd10", 00:09:39.044 "bdev_name": "Nvme1n1" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd11", 00:09:39.044 "bdev_name": "Nvme2n1" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd12", 00:09:39.044 "bdev_name": "Nvme2n2" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd13", 00:09:39.044 "bdev_name": "Nvme2n3" 00:09:39.044 }, 00:09:39.044 { 00:09:39.044 "nbd_device": "/dev/nbd14", 00:09:39.044 "bdev_name": "Nvme3n1" 00:09:39.044 } 00:09:39.044 ]' 00:09:39.044 20:56:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:39.303 /dev/nbd1 00:09:39.303 /dev/nbd10 00:09:39.303 /dev/nbd11 00:09:39.303 /dev/nbd12 00:09:39.303 /dev/nbd13 00:09:39.303 /dev/nbd14' 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:39.303 /dev/nbd1 00:09:39.303 /dev/nbd10 00:09:39.303 /dev/nbd11 00:09:39.303 /dev/nbd12 00:09:39.303 /dev/nbd13 00:09:39.303 /dev/nbd14' 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@65 -- # count=7 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@95 -- # count=7 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:39.303 256+0 records in 00:09:39.303 256+0 records out 00:09:39.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692736 s, 151 MB/s 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:39.303 256+0 records in 00:09:39.303 256+0 records out 00:09:39.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157384 s, 6.7 MB/s 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:39.303 20:56:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:39.562 256+0 records in 00:09:39.562 256+0 records out 00:09:39.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193174 s, 5.4 MB/s 00:09:39.562 20:56:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:39.562 20:56:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:39.821 256+0 records in 00:09:39.821 256+0 records out 00:09:39.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184125 s, 5.7 MB/s 00:09:39.821 20:56:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:39.821 20:56:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:39.821 256+0 records in 00:09:39.821 256+0 records out 00:09:39.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188431 s, 5.6 MB/s 00:09:39.821 20:56:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:39.821 20:56:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:40.080 256+0 records in 00:09:40.080 256+0 records out 00:09:40.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182214 s, 5.8 MB/s 00:09:40.080 20:56:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:40.080 20:56:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:40.339 256+0 records in 00:09:40.339 256+0 records out 00:09:40.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189629 s, 5.5 MB/s 00:09:40.339 20:56:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:40.339 20:56:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:40.598 256+0 records in 00:09:40.598 256+0 records out 00:09:40.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186113 s, 5.6 MB/s 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@51 -- # local i 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.598 20:56:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@41 -- # break 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.856 20:56:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@41 -- # break 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.115 20:56:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@41 -- # break 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.374 20:56:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:41.633 20:56:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:41.633 20:56:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:41.633 20:56:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:41.634 20:56:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.634 20:56:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.634 20:56:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:41.634 20:56:02 -- bdev/nbd_common.sh@41 -- # break 00:09:41.634 20:56:02 -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.634 20:56:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.634 20:56:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@41 -- # break 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.893 20:56:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:42.155 20:56:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:42.155 20:56:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:42.155 20:56:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:42.155 20:56:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:42.155 20:56:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:42.155 20:56:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:42.155 20:56:03 -- bdev/nbd_common.sh@41 -- # break 00:09:42.155 20:56:03 -- bdev/nbd_common.sh@45 -- # return 0 00:09:42.155 20:56:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
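After each nbd_stop_disk the trace calls waitfornbd_exit, the counterpart of the startup helper: it keeps checking /proc/partitions until the nbd entry disappears, so the next test never races a half-detached device. A rough sketch, with the retry delay again assumed rather than copied from autotest_common.sh.

# Wait for an nbd device to drop out of /proc/partitions after nbd_stop_disk.
waitfornbd_exit_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}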
00:09:42.155 20:56:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@41 -- # break 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@45 -- # return 0 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:42.413 20:56:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@65 -- # true 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@65 -- # count=0 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@104 -- # count=0 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@109 -- # return 0 00:09:42.671 20:56:03 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:42.671 20:56:03 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:42.930 malloc_lvol_verify 00:09:42.930 20:56:03 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:42.930 032b6ea3-ff86-42f1-a700-1e5763b687cc 00:09:43.189 20:56:03 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:43.189 d8730133-0a70-4590-9b38-144411e132df 00:09:43.189 20:56:04 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:43.447 /dev/nbd0 00:09:43.447 20:56:04 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:43.447 mke2fs 1.47.0 (5-Feb-2023) 00:09:43.447 Discarding device blocks: 0/4096 done 00:09:43.447 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:43.447 00:09:43.447 Allocating group tables: 0/1 done 00:09:43.447 Writing inode tables: 0/1 done 00:09:43.447 Creating journal (1024 blocks): done 
00:09:43.447 Writing superblocks and filesystem accounting information: 0/1 done 00:09:43.447 00:09:43.447 20:56:04 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:43.447 20:56:04 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:43.448 20:56:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.448 20:56:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:43.448 20:56:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:43.448 20:56:04 -- bdev/nbd_common.sh@51 -- # local i 00:09:43.448 20:56:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.448 20:56:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:43.706 20:56:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:43.706 20:56:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:43.706 20:56:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:43.706 20:56:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.706 20:56:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.706 20:56:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.706 20:56:04 -- bdev/nbd_common.sh@41 -- # break 00:09:43.706 20:56:04 -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.706 20:56:04 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:43.707 20:56:04 -- bdev/nbd_common.sh@147 -- # return 0 00:09:43.707 20:56:04 -- bdev/blockdev.sh@324 -- # killprocess 62572 00:09:43.707 20:56:04 -- common/autotest_common.sh@936 -- # '[' -z 62572 ']' 00:09:43.707 20:56:04 -- common/autotest_common.sh@940 -- # kill -0 62572 00:09:43.707 20:56:04 -- common/autotest_common.sh@941 -- # uname 00:09:43.707 20:56:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:43.707 20:56:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62572 00:09:43.707 20:56:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:43.707 20:56:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:43.707 20:56:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62572' 00:09:43.707 killing process with pid 62572 00:09:43.707 20:56:04 -- common/autotest_common.sh@955 -- # kill 62572 00:09:43.707 20:56:04 -- common/autotest_common.sh@960 -- # wait 62572 00:09:44.642 20:56:05 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:09:44.642 00:09:44.642 real 0m12.725s 00:09:44.642 user 0m18.069s 00:09:44.642 sys 0m4.058s 00:09:44.642 20:56:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.642 ************************************ 00:09:44.642 END TEST bdev_nbd 00:09:44.642 ************************************ 00:09:44.642 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:09:44.642 20:56:05 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:44.642 20:56:05 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:09:44.642 20:56:05 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:09:44.642 skipping fio tests on NVMe due to multi-ns failures. 00:09:44.642 20:56:05 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
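The lvol pass that closes out the nbd test above (nbd_with_lvol_verify) builds a 16 MiB malloc bdev with 512-byte blocks, layers a lvstore and a 4 MiB lvol on it, exports the lvol over /dev/nbd0, and proves it works end to end by running mkfs.ext4 on it before stopping the disk. Rearranged from the trace into the plain RPC sequence, with the rpc.py and socket paths from this run:

# Malloc bdev -> lvstore -> lvol -> nbd export -> mkfs -> teardown.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0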
00:09:44.642 20:56:05 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:44.642 20:56:05 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:44.642 20:56:05 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:44.642 20:56:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.642 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:09:44.642 ************************************ 00:09:44.642 START TEST bdev_verify 00:09:44.642 ************************************ 00:09:44.642 20:56:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:44.642 [2024-12-08 20:56:05.674150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:44.642 [2024-12-08 20:56:05.674284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62998 ] 00:09:44.900 [2024-12-08 20:56:05.827430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.158 [2024-12-08 20:56:05.973837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.158 [2024-12-08 20:56:05.973867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.725 Running I/O for 5 seconds... 00:09:50.998 00:09:50.998 Latency(us) 00:09:50.998 [2024-12-08T20:56:12.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.998 [2024-12-08T20:56:12.041Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:50.998 Verification LBA range: start 0x0 length 0x5e800 00:09:50.998 Nvme0n1p1 : 5.05 2297.22 8.97 0.00 0.00 55547.84 8102.63 55526.87 00:09:50.998 [2024-12-08T20:56:12.041Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:50.998 Verification LBA range: start 0x5e800 length 0x5e800 00:09:50.998 Nvme0n1p1 : 5.06 2324.64 9.08 0.00 0.00 54682.29 3768.32 48854.11 00:09:50.998 [2024-12-08T20:56:12.041Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:50.998 Verification LBA range: start 0x0 length 0x5e7ff 00:09:50.998 Nvme0n1p2 : 5.05 2301.48 8.99 0.00 0.00 55396.11 4796.04 51475.55 00:09:50.998 [2024-12-08T20:56:12.042Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:50.999 Nvme0n1p2 : 5.06 2323.60 9.08 0.00 0.00 54645.60 4944.99 49330.73 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x0 length 0xa0000 00:09:50.999 Nvme1n1 : 5.05 2300.84 8.99 0.00 0.00 55356.71 5302.46 51475.55 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0xa0000 length 0xa0000 00:09:50.999 Nvme1n1 : 5.05 2321.32 9.07 0.00 0.00 54980.03 8043.05 53858.68 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x0 length 0x80000 00:09:50.999 Nvme2n1 : 5.06 2300.21 
8.99 0.00 0.00 55321.28 5838.66 52428.80 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x80000 length 0x80000 00:09:50.999 Nvme2n1 : 5.05 2320.57 9.06 0.00 0.00 54952.58 7983.48 52190.49 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x0 length 0x80000 00:09:50.999 Nvme2n2 : 5.06 2299.15 8.98 0.00 0.00 55288.96 7238.75 53143.74 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x80000 length 0x80000 00:09:50.999 Nvme2n2 : 5.05 2319.81 9.06 0.00 0.00 54916.29 8400.52 51237.24 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x0 length 0x80000 00:09:50.999 Nvme2n3 : 5.06 2298.20 8.98 0.00 0.00 55260.13 8579.26 53858.68 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x80000 length 0x80000 00:09:50.999 Nvme2n3 : 5.05 2319.18 9.06 0.00 0.00 54858.89 8698.41 48854.11 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x0 length 0x20000 00:09:50.999 Nvme3n1 : 5.06 2297.30 8.97 0.00 0.00 55218.93 9651.67 53382.05 00:09:50.999 [2024-12-08T20:56:12.042Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:50.999 Verification LBA range: start 0x20000 length 0x20000 00:09:50.999 Nvme3n1 : 5.06 2325.71 9.08 0.00 0.00 54717.51 2561.86 47900.86 00:09:50.999 [2024-12-08T20:56:12.042Z] =================================================================================================================== 00:09:50.999 [2024-12-08T20:56:12.042Z] Total : 32349.23 126.36 0.00 0.00 55080.26 2561.86 55526.87 00:09:52.903 00:09:52.903 real 0m7.934s 00:09:52.903 user 0m14.784s 00:09:52.903 sys 0m0.203s 00:09:52.903 20:56:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.903 ************************************ 00:09:52.903 END TEST bdev_verify 00:09:52.903 ************************************ 00:09:52.903 20:56:13 -- common/autotest_common.sh@10 -- # set +x 00:09:52.903 20:56:13 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:52.903 20:56:13 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:52.903 20:56:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.903 20:56:13 -- common/autotest_common.sh@10 -- # set +x 00:09:52.903 ************************************ 00:09:52.903 START TEST bdev_verify_big_io 00:09:52.903 ************************************ 00:09:52.903 20:56:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:52.903 [2024-12-08 20:56:13.681978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
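The verify stage that just finished (bdev_verify) is a single bdevperf run replaying a 4 KiB, queue-depth-128 verify workload for 5 seconds against every bdev described in bdev.json, pinned to cores 0 and 1 via -m 0x3, which is why the latency table reports each bdev once per core mask. The command as traced, reformatted for standalone reproduction with the paths from this run:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3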
00:09:52.903 [2024-12-08 20:56:13.682166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63107 ] 00:09:52.903 [2024-12-08 20:56:13.851885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.162 [2024-12-08 20:56:14.006983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.162 [2024-12-08 20:56:14.007000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.730 Running I/O for 5 seconds... 00:10:00.325 00:10:00.325 Latency(us) 00:10:00.325 [2024-12-08T20:56:21.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.325 [2024-12-08T20:56:21.368Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:00.325 Verification LBA range: start 0x0 length 0x5e80 00:10:00.325 Nvme0n1p1 : 5.38 250.88 15.68 0.00 0.00 501356.36 46232.67 690153.66 00:10:00.325 [2024-12-08T20:56:21.368Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:00.325 Verification LBA range: start 0x5e80 length 0x5e80 00:10:00.325 Nvme0n1p1 : 5.41 289.48 18.09 0.00 0.00 408210.47 3515.11 537633.51 00:10:00.325 [2024-12-08T20:56:21.368Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:00.325 Verification LBA range: start 0x0 length 0x5e7f 00:10:00.325 Nvme0n1p2 : 5.38 250.75 15.67 0.00 0.00 495364.10 47424.23 644397.61 00:10:00.325 [2024-12-08T20:56:21.368Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:00.325 Verification LBA range: start 0x5e7f length 0x5e7f 00:10:00.325 Nvme0n1p2 : 5.35 259.94 16.25 0.00 0.00 484335.75 41466.41 659649.63 00:10:00.325 [2024-12-08T20:56:21.368Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:00.325 Verification LBA range: start 0x0 length 0xa000 00:10:00.325 Nvme1n1 : 5.41 257.16 16.07 0.00 0.00 479417.71 25261.15 591015.56 00:10:00.325 [2024-12-08T20:56:21.368Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:00.325 Verification LBA range: start 0xa000 length 0xa000 00:10:00.325 Nvme1n1 : 5.35 259.84 16.24 0.00 0.00 479052.27 41943.04 617706.59 00:10:00.325 [2024-12-08T20:56:21.369Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:00.326 Verification LBA range: start 0x0 length 0x8000 00:10:00.326 Nvme2n1 : 5.41 257.04 16.06 0.00 0.00 473632.75 25499.46 549072.52 00:10:00.326 [2024-12-08T20:56:21.369Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:00.326 Verification LBA range: start 0x8000 length 0x8000 00:10:00.326 Nvme2n1 : 5.36 259.74 16.23 0.00 0.00 473556.69 42419.67 571950.55 00:10:00.326 [2024-12-08T20:56:21.369Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:00.326 Verification LBA range: start 0x0 length 0x8000 00:10:00.326 Nvme2n2 : 5.41 256.93 16.06 0.00 0.00 467392.25 26095.24 545259.52 00:10:00.326 [2024-12-08T20:56:21.369Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:00.326 Verification LBA range: start 0x8000 length 0x8000 00:10:00.326 Nvme2n2 : 5.38 265.75 16.61 0.00 0.00 459270.96 22878.02 526194.50 00:10:00.326 [2024-12-08T20:56:21.369Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:00.326 Verification LBA range: start 0x0 length 
0x8000 00:10:00.326 Nvme2n3 : 5.43 263.42 16.46 0.00 0.00 451163.45 11498.59 876990.84 00:10:00.326 [2024-12-08T20:56:21.369Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:00.326 Verification LBA range: start 0x8000 length 0x8000 00:10:00.326 Nvme2n3 : 5.38 265.65 16.60 0.00 0.00 453385.55 23354.65 495690.47 00:10:00.326 [2024-12-08T20:56:21.369Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:00.326 Verification LBA range: start 0x0 length 0x2000 00:10:00.326 Nvme3n1 : 5.44 280.09 17.51 0.00 0.00 420195.86 6047.19 899868.86 00:10:00.326 [2024-12-08T20:56:21.369Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:00.326 Verification LBA range: start 0x2000 length 0x2000 00:10:00.326 Nvme3n1 : 5.40 273.59 17.10 0.00 0.00 435862.09 15847.80 491877.47 00:10:00.326 [2024-12-08T20:56:21.369Z] =================================================================================================================== 00:10:00.326 [2024-12-08T20:56:21.369Z] Total : 3690.24 230.64 0.00 0.00 461925.98 3515.11 899868.86 00:10:00.600 00:10:00.600 real 0m8.006s 00:10:00.600 user 0m14.815s 00:10:00.600 sys 0m0.273s 00:10:00.600 20:56:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:00.600 20:56:21 -- common/autotest_common.sh@10 -- # set +x 00:10:00.600 ************************************ 00:10:00.600 END TEST bdev_verify_big_io 00:10:00.600 ************************************ 00:10:00.865 20:56:21 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:00.865 20:56:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:00.865 20:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:00.865 20:56:21 -- common/autotest_common.sh@10 -- # set +x 00:10:00.865 ************************************ 00:10:00.865 START TEST bdev_write_zeroes 00:10:00.865 ************************************ 00:10:00.865 20:56:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:00.865 [2024-12-08 20:56:21.740052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:00.865 [2024-12-08 20:56:21.740231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63211 ] 00:10:00.865 [2024-12-08 20:56:21.907722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.124 [2024-12-08 20:56:22.055929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.691 Running I/O for 1 seconds... 
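The two runs bracketing this point reuse the same bdevperf binary and bdev.json and only swap the workload knobs: the big-I/O pass (bdev_verify_big_io) raises the I/O size to 64 KiB while keeping the two-core verify workload, and the write_zeroes pass switches the workload name and shortens the run to one second on a single core. Reconstructed from the traced command lines:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
BDEV_JSON=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$BDEVPERF" --json "$BDEV_JSON" -q 128 -o 65536 -w verify -t 5 -C -m 0x3
"$BDEVPERF" --json "$BDEV_JSON" -q 128 -o 4096 -w write_zeroes -t 1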
00:10:02.625 00:10:02.625 Latency(us) 00:10:02.625 [2024-12-08T20:56:23.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.625 [2024-12-08T20:56:23.668Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.625 Nvme0n1p1 : 1.02 7140.89 27.89 0.00 0.00 17848.16 13941.29 29550.78 00:10:02.625 [2024-12-08T20:56:23.668Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.625 Nvme0n1p2 : 1.02 7129.43 27.85 0.00 0.00 17841.81 14239.19 29908.25 00:10:02.625 [2024-12-08T20:56:23.668Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.625 Nvme1n1 : 1.02 7118.76 27.81 0.00 0.00 17806.25 14596.65 28120.90 00:10:02.625 [2024-12-08T20:56:23.668Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.625 Nvme2n1 : 1.03 7157.26 27.96 0.00 0.00 17656.60 10843.23 22639.71 00:10:02.625 [2024-12-08T20:56:23.668Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.625 Nvme2n2 : 1.03 7146.46 27.92 0.00 0.00 17656.29 11200.70 22043.93 00:10:02.625 [2024-12-08T20:56:23.668Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.625 Nvme2n3 : 1.03 7135.93 27.87 0.00 0.00 17651.72 11558.17 22401.40 00:10:02.625 [2024-12-08T20:56:23.668Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.625 Nvme3n1 : 1.03 7125.31 27.83 0.00 0.00 17580.33 8400.52 20256.58 00:10:02.625 [2024-12-08T20:56:23.668Z] =================================================================================================================== 00:10:02.625 [2024-12-08T20:56:23.668Z] Total : 49954.03 195.13 0.00 0.00 17719.75 8400.52 29908.25 00:10:03.997 00:10:03.997 real 0m3.014s 00:10:03.997 user 0m2.675s 00:10:03.997 sys 0m0.219s 00:10:03.997 20:56:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:03.997 ************************************ 00:10:03.997 END TEST bdev_write_zeroes 00:10:03.997 ************************************ 00:10:03.997 20:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:03.997 20:56:24 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:03.997 20:56:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:03.997 20:56:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.997 20:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:03.997 ************************************ 00:10:03.997 START TEST bdev_json_nonenclosed 00:10:03.997 ************************************ 00:10:03.997 20:56:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:03.997 [2024-12-08 20:56:24.808418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:03.997 [2024-12-08 20:56:24.808601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63265 ] 00:10:03.997 [2024-12-08 20:56:24.978632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.254 [2024-12-08 20:56:25.127128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.254 [2024-12-08 20:56:25.127319] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:04.254 [2024-12-08 20:56:25.127349] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:04.511 00:10:04.511 real 0m0.716s 00:10:04.511 user 0m0.490s 00:10:04.511 sys 0m0.120s 00:10:04.511 20:56:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.511 20:56:25 -- common/autotest_common.sh@10 -- # set +x 00:10:04.511 ************************************ 00:10:04.511 END TEST bdev_json_nonenclosed 00:10:04.511 ************************************ 00:10:04.511 20:56:25 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:04.511 20:56:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:04.511 20:56:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.511 20:56:25 -- common/autotest_common.sh@10 -- # set +x 00:10:04.511 ************************************ 00:10:04.511 START TEST bdev_json_nonarray 00:10:04.511 ************************************ 00:10:04.511 20:56:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:04.770 [2024-12-08 20:56:25.573435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:04.770 [2024-12-08 20:56:25.573594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63296 ] 00:10:04.770 [2024-12-08 20:56:25.742499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.028 [2024-12-08 20:56:25.886681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.028 [2024-12-08 20:56:25.886847] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
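The two *ERROR* lines above are the expected outcome of these tests: SPDK's JSON config loader requires a single top-level object whose "subsystems" key is an array, and the nonenclosed.json and nonarray.json fixtures each break one of those rules on purpose. The fixtures' exact contents are not reproduced in this log, so the shapes below are only an illustration of the two failure modes next to a well-formed config (the malloc bdev entry is hypothetical filler):

# Well-formed: one top-level object, "subsystems" maps to an array.
cat > good.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 1024, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF

# "not enclosed in {}": the top level is not an object.
echo '[ { "subsystem": "bdev", "config": [] } ]' > nonenclosed.json

# "'subsystems' should be an array": the key maps to an object.
echo '{ "subsystems": { "subsystem": "bdev" } }' > nonarray.json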
00:10:05.028 [2024-12-08 20:56:25.886871] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:05.287 00:10:05.287 real 0m0.708s 00:10:05.287 user 0m0.489s 00:10:05.287 sys 0m0.114s 00:10:05.287 20:56:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:05.287 ************************************ 00:10:05.287 END TEST bdev_json_nonarray 00:10:05.287 ************************************ 00:10:05.287 20:56:26 -- common/autotest_common.sh@10 -- # set +x 00:10:05.287 20:56:26 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:10:05.287 20:56:26 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:10:05.287 20:56:26 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:05.287 20:56:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:05.287 20:56:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.287 20:56:26 -- common/autotest_common.sh@10 -- # set +x 00:10:05.287 ************************************ 00:10:05.287 START TEST bdev_gpt_uuid 00:10:05.287 ************************************ 00:10:05.287 20:56:26 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:10:05.287 20:56:26 -- bdev/blockdev.sh@612 -- # local bdev 00:10:05.287 20:56:26 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:10:05.287 20:56:26 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=63322 00:10:05.287 20:56:26 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:05.287 20:56:26 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:05.287 20:56:26 -- bdev/blockdev.sh@47 -- # waitforlisten 63322 00:10:05.287 20:56:26 -- common/autotest_common.sh@829 -- # '[' -z 63322 ']' 00:10:05.287 20:56:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.287 20:56:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.287 20:56:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.287 20:56:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.287 20:56:26 -- common/autotest_common.sh@10 -- # set +x 00:10:05.565 [2024-12-08 20:56:26.350688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:05.565 [2024-12-08 20:56:26.350854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63322 ] 00:10:05.565 [2024-12-08 20:56:26.520146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.824 [2024-12-08 20:56:26.664013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:05.824 [2024-12-08 20:56:26.664237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.391 20:56:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.391 20:56:27 -- common/autotest_common.sh@862 -- # return 0 00:10:06.391 20:56:27 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:06.391 20:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.391 20:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 Some configs were skipped because the RPC state that can call them passed over. 
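The trace that follows is the heart of bdev_gpt_uuid: with bdev.json loaded into the target, the test fetches each GPT partition bdev by its unique partition GUID and checks that the alias and the gpt driver_specific fields carry the same GUID. rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, so the same check, condensed to a sketch, looks roughly like this (GUID taken from this run):

guid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first, from this run

# Ask the running target for the partition bdev by GUID, then verify that
# exactly one bdev matched and that it carries the GUID both as its alias
# and in its GPT driver-specific data.
bdev=$(scripts/rpc.py bdev_get_bdevs -b "$guid")
[[ $(jq -r length <<< "$bdev") == 1 ]]
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$guid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$guid" ]]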
00:10:06.650 20:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.650 20:56:27 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:10:06.650 20:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.650 20:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 20:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.650 20:56:27 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:06.650 20:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.650 20:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 20:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.650 20:56:27 -- bdev/blockdev.sh@619 -- # bdev='[ 00:10:06.650 { 00:10:06.650 "name": "Nvme0n1p1", 00:10:06.650 "aliases": [ 00:10:06.650 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:06.650 ], 00:10:06.650 "product_name": "GPT Disk", 00:10:06.650 "block_size": 4096, 00:10:06.650 "num_blocks": 774144, 00:10:06.650 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:06.650 "md_size": 64, 00:10:06.650 "md_interleave": false, 00:10:06.650 "dif_type": 0, 00:10:06.650 "assigned_rate_limits": { 00:10:06.650 "rw_ios_per_sec": 0, 00:10:06.650 "rw_mbytes_per_sec": 0, 00:10:06.650 "r_mbytes_per_sec": 0, 00:10:06.650 "w_mbytes_per_sec": 0 00:10:06.650 }, 00:10:06.650 "claimed": false, 00:10:06.650 "zoned": false, 00:10:06.650 "supported_io_types": { 00:10:06.650 "read": true, 00:10:06.650 "write": true, 00:10:06.650 "unmap": true, 00:10:06.650 "write_zeroes": true, 00:10:06.650 "flush": true, 00:10:06.650 "reset": true, 00:10:06.650 "compare": true, 00:10:06.650 "compare_and_write": false, 00:10:06.650 "abort": true, 00:10:06.650 "nvme_admin": false, 00:10:06.650 "nvme_io": false 00:10:06.650 }, 00:10:06.650 "driver_specific": { 00:10:06.650 "gpt": { 00:10:06.650 "base_bdev": "Nvme0n1", 00:10:06.650 "offset_blocks": 256, 00:10:06.650 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:06.650 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:06.650 "partition_name": "SPDK_TEST_first" 00:10:06.650 } 00:10:06.650 } 00:10:06.650 } 00:10:06.650 ]' 00:10:06.650 20:56:27 -- bdev/blockdev.sh@620 -- # jq -r length 00:10:06.650 20:56:27 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:10:06.650 20:56:27 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:10:06.910 20:56:27 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:06.910 20:56:27 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:06.910 20:56:27 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:06.910 20:56:27 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:06.910 20:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.910 20:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:06.910 20:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.910 20:56:27 -- bdev/blockdev.sh@624 -- # bdev='[ 00:10:06.910 { 00:10:06.910 "name": "Nvme0n1p2", 00:10:06.910 "aliases": [ 00:10:06.910 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:06.910 ], 00:10:06.910 "product_name": "GPT Disk", 00:10:06.910 "block_size": 4096, 00:10:06.910 "num_blocks": 774143, 00:10:06.910 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:10:06.910 "md_size": 64, 00:10:06.910 "md_interleave": false, 00:10:06.910 "dif_type": 0, 00:10:06.910 "assigned_rate_limits": { 00:10:06.910 "rw_ios_per_sec": 0, 00:10:06.910 "rw_mbytes_per_sec": 0, 00:10:06.910 "r_mbytes_per_sec": 0, 00:10:06.910 "w_mbytes_per_sec": 0 00:10:06.910 }, 00:10:06.910 "claimed": false, 00:10:06.910 "zoned": false, 00:10:06.910 "supported_io_types": { 00:10:06.910 "read": true, 00:10:06.910 "write": true, 00:10:06.910 "unmap": true, 00:10:06.910 "write_zeroes": true, 00:10:06.910 "flush": true, 00:10:06.910 "reset": true, 00:10:06.910 "compare": true, 00:10:06.910 "compare_and_write": false, 00:10:06.910 "abort": true, 00:10:06.910 "nvme_admin": false, 00:10:06.910 "nvme_io": false 00:10:06.910 }, 00:10:06.910 "driver_specific": { 00:10:06.910 "gpt": { 00:10:06.910 "base_bdev": "Nvme0n1", 00:10:06.910 "offset_blocks": 774400, 00:10:06.910 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:06.910 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:06.910 "partition_name": "SPDK_TEST_second" 00:10:06.910 } 00:10:06.910 } 00:10:06.910 } 00:10:06.910 ]' 00:10:06.910 20:56:27 -- bdev/blockdev.sh@625 -- # jq -r length 00:10:06.910 20:56:27 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:10:06.910 20:56:27 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:10:06.910 20:56:27 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:06.910 20:56:27 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:06.910 20:56:27 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:06.910 20:56:27 -- bdev/blockdev.sh@629 -- # killprocess 63322 00:10:06.910 20:56:27 -- common/autotest_common.sh@936 -- # '[' -z 63322 ']' 00:10:06.910 20:56:27 -- common/autotest_common.sh@940 -- # kill -0 63322 00:10:06.910 20:56:27 -- common/autotest_common.sh@941 -- # uname 00:10:06.910 20:56:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:06.910 20:56:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63322 00:10:07.169 20:56:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:07.169 killing process with pid 63322 00:10:07.169 20:56:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:07.169 20:56:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63322' 00:10:07.169 20:56:27 -- common/autotest_common.sh@955 -- # kill 63322 00:10:07.169 20:56:27 -- common/autotest_common.sh@960 -- # wait 63322 00:10:08.549 00:10:08.549 real 0m3.317s 00:10:08.549 user 0m3.658s 00:10:08.549 sys 0m0.413s 00:10:08.549 ************************************ 00:10:08.549 END TEST bdev_gpt_uuid 00:10:08.550 ************************************ 00:10:08.550 20:56:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:08.550 20:56:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.809 20:56:29 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:10:08.809 20:56:29 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:08.809 20:56:29 -- bdev/blockdev.sh@809 -- # cleanup 00:10:08.809 20:56:29 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:08.809 20:56:29 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:08.809 20:56:29 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:10:08.809 20:56:29 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:10:08.809 20:56:29 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:10:08.809 20:56:29 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:09.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.327 Waiting for block devices as requested 00:10:09.327 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.327 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.586 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.586 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.873 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:14.873 20:56:35 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:10:14.873 20:56:35 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:10:14.874 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:14.874 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:10:14.874 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:14.874 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:10:14.874 20:56:35 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:10:14.874 00:10:14.874 real 1m0.731s 00:10:14.874 user 1m19.147s 00:10:14.874 sys 0m8.995s 00:10:14.874 20:56:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.874 20:56:35 -- common/autotest_common.sh@10 -- # set +x 00:10:14.874 ************************************ 00:10:14.874 END TEST blockdev_nvme_gpt 00:10:14.874 ************************************ 00:10:14.874 20:56:35 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:14.874 20:56:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:14.874 20:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.874 20:56:35 -- common/autotest_common.sh@10 -- # set +x 00:10:14.874 ************************************ 00:10:14.874 START TEST nvme 00:10:14.874 ************************************ 00:10:14.874 20:56:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:15.132 * Looking for test storage... 
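One detail of the teardown shown above deserves a note: because the suite created real GPT partitions on /dev/nvme2n1, cleanup cannot simply unbind the driver. setup.sh reset first returns the controllers to the kernel nvme driver so the block device node exists again, and only then are the GPT signatures wiped, which is why wipefs reports three erases (the primary GPT header, the backup header near the end of the disk, and the protective MBR at offset 0x1fe) before re-reading the now-empty partition table. Done by hand, that step is just:

dev=/dev/nvme2n1   # device node from this run; substitute your own

# Erase the primary GPT header, the backup header at the end of the disk,
# and the protective MBR (the three offsets reported above); wipefs then
# asks the kernel to re-read the partition table itself.
wipefs --all "$dev"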
00:10:15.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:15.132 20:56:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:15.132 20:56:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:15.132 20:56:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:15.132 20:56:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:15.132 20:56:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:15.132 20:56:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:15.132 20:56:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:15.132 20:56:36 -- scripts/common.sh@335 -- # IFS=.-: 00:10:15.132 20:56:36 -- scripts/common.sh@335 -- # read -ra ver1 00:10:15.132 20:56:36 -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.132 20:56:36 -- scripts/common.sh@336 -- # read -ra ver2 00:10:15.132 20:56:36 -- scripts/common.sh@337 -- # local 'op=<' 00:10:15.132 20:56:36 -- scripts/common.sh@339 -- # ver1_l=2 00:10:15.132 20:56:36 -- scripts/common.sh@340 -- # ver2_l=1 00:10:15.132 20:56:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:15.132 20:56:36 -- scripts/common.sh@343 -- # case "$op" in 00:10:15.132 20:56:36 -- scripts/common.sh@344 -- # : 1 00:10:15.132 20:56:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:15.132 20:56:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.132 20:56:36 -- scripts/common.sh@364 -- # decimal 1 00:10:15.132 20:56:36 -- scripts/common.sh@352 -- # local d=1 00:10:15.132 20:56:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.133 20:56:36 -- scripts/common.sh@354 -- # echo 1 00:10:15.133 20:56:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:15.133 20:56:36 -- scripts/common.sh@365 -- # decimal 2 00:10:15.133 20:56:36 -- scripts/common.sh@352 -- # local d=2 00:10:15.133 20:56:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.133 20:56:36 -- scripts/common.sh@354 -- # echo 2 00:10:15.133 20:56:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:15.133 20:56:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:15.133 20:56:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:15.133 20:56:36 -- scripts/common.sh@367 -- # return 0 00:10:15.133 20:56:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.133 20:56:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.133 --rc genhtml_branch_coverage=1 00:10:15.133 --rc genhtml_function_coverage=1 00:10:15.133 --rc genhtml_legend=1 00:10:15.133 --rc geninfo_all_blocks=1 00:10:15.133 --rc geninfo_unexecuted_blocks=1 00:10:15.133 00:10:15.133 ' 00:10:15.133 20:56:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.133 --rc genhtml_branch_coverage=1 00:10:15.133 --rc genhtml_function_coverage=1 00:10:15.133 --rc genhtml_legend=1 00:10:15.133 --rc geninfo_all_blocks=1 00:10:15.133 --rc geninfo_unexecuted_blocks=1 00:10:15.133 00:10:15.133 ' 00:10:15.133 20:56:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.133 --rc genhtml_branch_coverage=1 00:10:15.133 --rc genhtml_function_coverage=1 00:10:15.133 --rc genhtml_legend=1 00:10:15.133 --rc geninfo_all_blocks=1 00:10:15.133 --rc geninfo_unexecuted_blocks=1 00:10:15.133 00:10:15.133 ' 00:10:15.133 20:56:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.133 --rc genhtml_branch_coverage=1 00:10:15.133 --rc genhtml_function_coverage=1 00:10:15.133 --rc genhtml_legend=1 00:10:15.133 --rc geninfo_all_blocks=1 00:10:15.133 --rc geninfo_unexecuted_blocks=1 00:10:15.133 00:10:15.133 ' 00:10:15.133 20:56:36 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:16.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:16.326 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.326 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.326 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.326 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.326 20:56:37 -- nvme/nvme.sh@79 -- # uname 00:10:16.326 20:56:37 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:16.326 20:56:37 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:16.326 20:56:37 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:16.326 20:56:37 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:16.326 20:56:37 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:10:16.326 20:56:37 -- common/autotest_common.sh@1055 -- # echo 0 00:10:16.326 20:56:37 -- common/autotest_common.sh@1057 -- # stubpid=63983 00:10:16.326 20:56:37 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:16.326 Waiting for stub to ready for secondary processes... 00:10:16.326 20:56:37 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:10:16.326 20:56:37 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:16.326 20:56:37 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63983 ]] 00:10:16.326 20:56:37 -- common/autotest_common.sh@1062 -- # sleep 1s 00:10:16.585 [2024-12-08 20:56:37.389393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:16.585 [2024-12-08 20:56:37.389549] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.151 [2024-12-08 20:56:38.182684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.410 20:56:38 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:17.410 20:56:38 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63983 ]] 00:10:17.410 20:56:38 -- common/autotest_common.sh@1062 -- # sleep 1s 00:10:17.410 [2024-12-08 20:56:38.385719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.410 [2024-12-08 20:56:38.385793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.410 [2024-12-08 20:56:38.385805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.410 [2024-12-08 20:56:38.411236] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.410 [2024-12-08 20:56:38.421882] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:17.410 [2024-12-08 20:56:38.422236] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:17.410 [2024-12-08 20:56:38.434484] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.410 [2024-12-08 20:56:38.434748] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:17.410 [2024-12-08 20:56:38.434920] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:17.410 [2024-12-08 20:56:38.446325] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.410 [2024-12-08 20:56:38.446542] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:17.410 [2024-12-08 20:56:38.446695] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:17.667 [2024-12-08 20:56:38.458021] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.667 [2024-12-08 20:56:38.458293] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:17.667 [2024-12-08 20:56:38.458514] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:17.667 [2024-12-08 20:56:38.458673] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:17.667 [2024-12-08 20:56:38.458865] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:18.601 done. 00:10:18.601 20:56:39 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:18.601 20:56:39 -- common/autotest_common.sh@1064 -- # echo done. 
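The "Waiting for stub..." and "done." lines above bracket the standard DPDK primary/secondary hand-off: start_stub launches test/app/stub/stub as the primary process (here with 4096 MB of hugepage memory on core mask 0xE), and the harness polls until the stub creates its ready file, bailing out if the stub PID vanishes first. A condensed sketch of that loop rather than the literal code, which lives in autotest_common.sh (PID from this run):

stubpid=63983   # from this run

# Wait until the stub signals readiness by creating /var/run/spdk_stub0.
# If the stub process dies first, abort instead of polling forever.
while [ ! -e /var/run/spdk_stub0 ]; do
    [[ -e /proc/$stubpid ]] || { echo "stub died" >&2; exit 1; }
    sleep 1s
done
echo done.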
00:10:18.601 20:56:39 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:18.601 20:56:39 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:18.601 20:56:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.601 20:56:39 -- common/autotest_common.sh@10 -- # set +x 00:10:18.601 ************************************ 00:10:18.601 START TEST nvme_reset 00:10:18.601 ************************************ 00:10:18.601 20:56:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:18.860 Initializing NVMe Controllers 00:10:18.860 Skipping QEMU NVMe SSD at 0000:00:06.0 00:10:18.860 Skipping QEMU NVMe SSD at 0000:00:07.0 00:10:18.860 Skipping QEMU NVMe SSD at 0000:00:09.0 00:10:18.860 Skipping QEMU NVMe SSD at 0000:00:08.0 00:10:18.860 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:18.860 00:10:18.860 real 0m0.315s 00:10:18.860 user 0m0.119s 00:10:18.860 sys 0m0.141s 00:10:18.860 20:56:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.860 ************************************ 00:10:18.860 END TEST nvme_reset 00:10:18.860 ************************************ 00:10:18.860 20:56:39 -- common/autotest_common.sh@10 -- # set +x 00:10:18.860 20:56:39 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:18.860 20:56:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:18.860 20:56:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.860 20:56:39 -- common/autotest_common.sh@10 -- # set +x 00:10:18.860 ************************************ 00:10:18.860 START TEST nvme_identify 00:10:18.860 ************************************ 00:10:18.860 20:56:39 -- common/autotest_common.sh@1114 -- # nvme_identify 00:10:18.860 20:56:39 -- nvme/nvme.sh@12 -- # bdfs=() 00:10:18.860 20:56:39 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:18.860 20:56:39 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:18.860 20:56:39 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:18.861 20:56:39 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:18.861 20:56:39 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:18.861 20:56:39 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:18.861 20:56:39 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:18.861 20:56:39 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:18.861 20:56:39 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:18.861 20:56:39 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:18.861 20:56:39 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:19.122 ===================================================== 00:10:19.122 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:19.122 ===================================================== 00:10:19.122 Controller Capabilities/Features 00:10:19.122 ================================ 00:10:19.122 Vendor ID: 1b36 00:10:19.122 Subsystem Vendor ID: 1af4 00:10:19.122 Serial Number: 12340 00:10:19.122 Model Number: QEMU NVMe Ctrl 00:10:19.122 Firmware Version: 8.0.0 00:10:19.122 Recommended Arb Burst: 6 00:10:19.122 IEEE OUI Identifier: 00 54 52 00:10:19.122 Multi-path I/O 00:10:19.122 May have multiple subsystem ports: No 00:10:19.122 May have 
multiple controllers: No 00:10:19.122 Associated with SR-IOV VF: No 00:10:19.122 Max Data Transfer Size: 524288 00:10:19.122 Max Number of Namespaces: 256 00:10:19.122 Max Number of I/O Queues: 64 00:10:19.122 NVMe Specification Version (VS): 1.4 00:10:19.122 NVMe Specification Version (Identify): 1.4 00:10:19.122 Maximum Queue Entries: 2048 00:10:19.122 Contiguous Queues Required: Yes 00:10:19.122 Arbitration Mechanisms Supported 00:10:19.123 Weighted Round Robin: Not Supported 00:10:19.123 Vendor Specific: Not Supported 00:10:19.123 Reset Timeout: 7500 ms 00:10:19.123 Doorbell Stride: 4 bytes 00:10:19.123 NVM Subsystem Reset: Not Supported 00:10:19.123 Command Sets Supported 00:10:19.123 NVM Command Set: Supported 00:10:19.123 Boot Partition: Not Supported 00:10:19.123 Memory Page Size Minimum: 4096 bytes 00:10:19.123 Memory Page Size Maximum: 65536 bytes 00:10:19.123 Persistent Memory Region: Not Supported 00:10:19.123 Optional Asynchronous Events Supported 00:10:19.123 Namespace Attribute Notices: Supported 00:10:19.123 Firmware Activation Notices: Not Supported 00:10:19.123 ANA Change Notices: Not Supported 00:10:19.123 PLE Aggregate Log Change Notices: Not Supported 00:10:19.123 LBA Status Info Alert Notices: Not Supported 00:10:19.123 EGE Aggregate Log Change Notices: Not Supported 00:10:19.123 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.123 Zone Descriptor Change Notices: Not Supported 00:10:19.123 Discovery Log Change Notices: Not Supported 00:10:19.123 Controller Attributes 00:10:19.123 128-bit Host Identifier: Not Supported 00:10:19.123 Non-Operational Permissive Mode: Not Supported 00:10:19.123 NVM Sets: Not Supported 00:10:19.123 Read Recovery Levels: Not Supported 00:10:19.123 Endurance Groups: Not Supported 00:10:19.123 Predictable Latency Mode: Not Supported 00:10:19.123 Traffic Based Keep ALive: Not Supported 00:10:19.123 Namespace Granularity: Not Supported 00:10:19.123 SQ Associations: Not Supported 00:10:19.123 UUID List: Not Supported 00:10:19.123 Multi-Domain Subsystem: Not Supported 00:10:19.123 Fixed Capacity Management: Not Supported 00:10:19.123 Variable Capacity Management: Not Supported 00:10:19.123 Delete Endurance Group: Not Supported 00:10:19.123 Delete NVM Set: Not Supported 00:10:19.123 Extended LBA Formats Supported: Supported 00:10:19.123 Flexible Data Placement Supported: Not Supported 00:10:19.123 00:10:19.123 Controller Memory Buffer Support 00:10:19.123 ================================ 00:10:19.123 Supported: No 00:10:19.123 00:10:19.123 Persistent Memory Region Support 00:10:19.123 ================================ 00:10:19.123 Supported: No 00:10:19.123 00:10:19.123 Admin Command Set Attributes 00:10:19.123 ============================ 00:10:19.123 Security Send/Receive: Not Supported 00:10:19.123 Format NVM: Supported 00:10:19.123 Firmware Activate/Download: Not Supported 00:10:19.123 Namespace Management: Supported 00:10:19.123 Device Self-Test: Not Supported 00:10:19.123 Directives: Supported 00:10:19.123 NVMe-MI: Not Supported 00:10:19.123 Virtualization Management: Not Supported 00:10:19.123 Doorbell Buffer Config: Supported 00:10:19.123 Get LBA Status Capability: Not Supported 00:10:19.123 Command & Feature Lockdown Capability: Not Supported 00:10:19.123 Abort Command Limit: 4 00:10:19.123 Async Event Request Limit: 4 00:10:19.123 Number of Firmware Slots: N/A 00:10:19.123 Firmware Slot 1 Read-Only: N/A 00:10:19.123 Firmware Activation Without Reset: N/A 00:10:19.123 Multiple Update Detection Support: N/A 00:10:19.123 Firmware 
Update Gr[2024-12-08 20:56:40.047261] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 64025 terminated unexpected 00:10:19.123 anularity: No Information Provided 00:10:19.123 Per-Namespace SMART Log: Yes 00:10:19.123 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.123 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:19.123 Command Effects Log Page: Supported 00:10:19.123 Get Log Page Extended Data: Supported 00:10:19.123 Telemetry Log Pages: Not Supported 00:10:19.123 Persistent Event Log Pages: Not Supported 00:10:19.123 Supported Log Pages Log Page: May Support 00:10:19.123 Commands Supported & Effects Log Page: Not Supported 00:10:19.123 Feature Identifiers & Effects Log Page:May Support 00:10:19.123 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.123 Data Area 4 for Telemetry Log: Not Supported 00:10:19.123 Error Log Page Entries Supported: 1 00:10:19.123 Keep Alive: Not Supported 00:10:19.123 00:10:19.123 NVM Command Set Attributes 00:10:19.123 ========================== 00:10:19.123 Submission Queue Entry Size 00:10:19.123 Max: 64 00:10:19.123 Min: 64 00:10:19.123 Completion Queue Entry Size 00:10:19.123 Max: 16 00:10:19.123 Min: 16 00:10:19.123 Number of Namespaces: 256 00:10:19.123 Compare Command: Supported 00:10:19.123 Write Uncorrectable Command: Not Supported 00:10:19.123 Dataset Management Command: Supported 00:10:19.123 Write Zeroes Command: Supported 00:10:19.123 Set Features Save Field: Supported 00:10:19.123 Reservations: Not Supported 00:10:19.123 Timestamp: Supported 00:10:19.123 Copy: Supported 00:10:19.123 Volatile Write Cache: Present 00:10:19.123 Atomic Write Unit (Normal): 1 00:10:19.123 Atomic Write Unit (PFail): 1 00:10:19.123 Atomic Compare & Write Unit: 1 00:10:19.123 Fused Compare & Write: Not Supported 00:10:19.123 Scatter-Gather List 00:10:19.123 SGL Command Set: Supported 00:10:19.123 SGL Keyed: Not Supported 00:10:19.123 SGL Bit Bucket Descriptor: Not Supported 00:10:19.123 SGL Metadata Pointer: Not Supported 00:10:19.123 Oversized SGL: Not Supported 00:10:19.123 SGL Metadata Address: Not Supported 00:10:19.123 SGL Offset: Not Supported 00:10:19.123 Transport SGL Data Block: Not Supported 00:10:19.123 Replay Protected Memory Block: Not Supported 00:10:19.123 00:10:19.123 Firmware Slot Information 00:10:19.123 ========================= 00:10:19.123 Active slot: 1 00:10:19.123 Slot 1 Firmware Revision: 1.0 00:10:19.123 00:10:19.123 00:10:19.123 Commands Supported and Effects 00:10:19.123 ============================== 00:10:19.123 Admin Commands 00:10:19.123 -------------- 00:10:19.123 Delete I/O Submission Queue (00h): Supported 00:10:19.123 Create I/O Submission Queue (01h): Supported 00:10:19.123 Get Log Page (02h): Supported 00:10:19.123 Delete I/O Completion Queue (04h): Supported 00:10:19.123 Create I/O Completion Queue (05h): Supported 00:10:19.123 Identify (06h): Supported 00:10:19.123 Abort (08h): Supported 00:10:19.123 Set Features (09h): Supported 00:10:19.123 Get Features (0Ah): Supported 00:10:19.123 Asynchronous Event Request (0Ch): Supported 00:10:19.123 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.123 Directive Send (19h): Supported 00:10:19.123 Directive Receive (1Ah): Supported 00:10:19.123 Virtualization Management (1Ch): Supported 00:10:19.123 Doorbell Buffer Config (7Ch): Supported 00:10:19.123 Format NVM (80h): Supported LBA-Change 00:10:19.123 I/O Commands 00:10:19.123 ------------ 00:10:19.123 Flush (00h): Supported LBA-Change 00:10:19.123 Write (01h): 
Supported LBA-Change 00:10:19.123 Read (02h): Supported 00:10:19.123 Compare (05h): Supported 00:10:19.123 Write Zeroes (08h): Supported LBA-Change 00:10:19.123 Dataset Management (09h): Supported LBA-Change 00:10:19.123 Unknown (0Ch): Supported 00:10:19.123 Unknown (12h): Supported 00:10:19.123 Copy (19h): Supported LBA-Change 00:10:19.123 Unknown (1Dh): Supported LBA-Change 00:10:19.123 00:10:19.123 Error Log 00:10:19.123 ========= 00:10:19.123 00:10:19.123 Arbitration 00:10:19.123 =========== 00:10:19.123 Arbitration Burst: no limit 00:10:19.123 00:10:19.123 Power Management 00:10:19.123 ================ 00:10:19.123 Number of Power States: 1 00:10:19.123 Current Power State: Power State #0 00:10:19.123 Power State #0: 00:10:19.123 Max Power: 25.00 W 00:10:19.123 Non-Operational State: Operational 00:10:19.123 Entry Latency: 16 microseconds 00:10:19.123 Exit Latency: 4 microseconds 00:10:19.123 Relative Read Throughput: 0 00:10:19.123 Relative Read Latency: 0 00:10:19.123 Relative Write Throughput: 0 00:10:19.123 Relative Write Latency: 0 00:10:19.123 Idle Power[2024-12-08 20:56:40.048795] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 64025 terminated unexpected 00:10:19.123 : Not Reported 00:10:19.123 Active Power: Not Reported 00:10:19.123 Non-Operational Permissive Mode: Not Supported 00:10:19.123 00:10:19.123 Health Information 00:10:19.123 ================== 00:10:19.123 Critical Warnings: 00:10:19.123 Available Spare Space: OK 00:10:19.123 Temperature: OK 00:10:19.123 Device Reliability: OK 00:10:19.123 Read Only: No 00:10:19.123 Volatile Memory Backup: OK 00:10:19.123 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.123 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.123 Available Spare: 0% 00:10:19.123 Available Spare Threshold: 0% 00:10:19.123 Life Percentage Used: 0% 00:10:19.123 Data Units Read: 1909 00:10:19.123 Data Units Written: 875 00:10:19.123 Host Read Commands: 86844 00:10:19.124 Host Write Commands: 42994 00:10:19.124 Controller Busy Time: 0 minutes 00:10:19.124 Power Cycles: 0 00:10:19.124 Power On Hours: 0 hours 00:10:19.124 Unsafe Shutdowns: 0 00:10:19.124 Unrecoverable Media Errors: 0 00:10:19.124 Lifetime Error Log Entries: 0 00:10:19.124 Warning Temperature Time: 0 minutes 00:10:19.124 Critical Temperature Time: 0 minutes 00:10:19.124 00:10:19.124 Number of Queues 00:10:19.124 ================ 00:10:19.124 Number of I/O Submission Queues: 64 00:10:19.124 Number of I/O Completion Queues: 64 00:10:19.124 00:10:19.124 ZNS Specific Controller Data 00:10:19.124 ============================ 00:10:19.124 Zone Append Size Limit: 0 00:10:19.124 00:10:19.124 00:10:19.124 Active Namespaces 00:10:19.124 ================= 00:10:19.124 Namespace ID:1 00:10:19.124 Error Recovery Timeout: Unlimited 00:10:19.124 Command Set Identifier: NVM (00h) 00:10:19.124 Deallocate: Supported 00:10:19.124 Deallocated/Unwritten Error: Supported 00:10:19.124 Deallocated Read Value: All 0x00 00:10:19.124 Deallocate in Write Zeroes: Not Supported 00:10:19.124 Deallocated Guard Field: 0xFFFF 00:10:19.124 Flush: Supported 00:10:19.124 Reservation: Not Supported 00:10:19.124 Metadata Transferred as: Separate Metadata Buffer 00:10:19.124 Namespace Sharing Capabilities: Private 00:10:19.124 Size (in LBAs): 1548666 (5GiB) 00:10:19.124 Capacity (in LBAs): 1548666 (5GiB) 00:10:19.124 Utilization (in LBAs): 1548666 (5GiB) 00:10:19.124 Thin Provisioning: Not Supported 00:10:19.124 Per-NS Atomic Units: No 00:10:19.124 Maximum Single Source Range Length: 
128 00:10:19.124 Maximum Copy Length: 128 00:10:19.124 Maximum Source Range Count: 128 00:10:19.124 NGUID/EUI64 Never Reused: No 00:10:19.124 Namespace Write Protected: No 00:10:19.124 Number of LBA Formats: 8 00:10:19.124 Current LBA Format: LBA Format #07 00:10:19.124 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.124 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.124 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.124 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.124 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.124 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.124 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.124 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.124 00:10:19.124 ===================================================== 00:10:19.124 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:19.124 ===================================================== 00:10:19.124 Controller Capabilities/Features 00:10:19.124 ================================ 00:10:19.124 Vendor ID: 1b36 00:10:19.124 Subsystem Vendor ID: 1af4 00:10:19.124 Serial Number: 12341 00:10:19.124 Model Number: QEMU NVMe Ctrl 00:10:19.124 Firmware Version: 8.0.0 00:10:19.124 Recommended Arb Burst: 6 00:10:19.124 IEEE OUI Identifier: 00 54 52 00:10:19.124 Multi-path I/O 00:10:19.124 May have multiple subsystem ports: No 00:10:19.124 May have multiple controllers: No 00:10:19.124 Associated with SR-IOV VF: No 00:10:19.124 Max Data Transfer Size: 524288 00:10:19.124 Max Number of Namespaces: 256 00:10:19.124 Max Number of I/O Queues: 64 00:10:19.124 NVMe Specification Version (VS): 1.4 00:10:19.124 NVMe Specification Version (Identify): 1.4 00:10:19.124 Maximum Queue Entries: 2048 00:10:19.124 Contiguous Queues Required: Yes 00:10:19.124 Arbitration Mechanisms Supported 00:10:19.124 Weighted Round Robin: Not Supported 00:10:19.124 Vendor Specific: Not Supported 00:10:19.124 Reset Timeout: 7500 ms 00:10:19.124 Doorbell Stride: 4 bytes 00:10:19.124 NVM Subsystem Reset: Not Supported 00:10:19.124 Command Sets Supported 00:10:19.124 NVM Command Set: Supported 00:10:19.124 Boot Partition: Not Supported 00:10:19.124 Memory Page Size Minimum: 4096 bytes 00:10:19.124 Memory Page Size Maximum: 65536 bytes 00:10:19.124 Persistent Memory Region: Not Supported 00:10:19.124 Optional Asynchronous Events Supported 00:10:19.124 Namespace Attribute Notices: Supported 00:10:19.124 Firmware Activation Notices: Not Supported 00:10:19.124 ANA Change Notices: Not Supported 00:10:19.124 PLE Aggregate Log Change Notices: Not Supported 00:10:19.124 LBA Status Info Alert Notices: Not Supported 00:10:19.124 EGE Aggregate Log Change Notices: Not Supported 00:10:19.124 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.124 Zone Descriptor Change Notices: Not Supported 00:10:19.124 Discovery Log Change Notices: Not Supported 00:10:19.124 Controller Attributes 00:10:19.124 128-bit Host Identifier: Not Supported 00:10:19.124 Non-Operational Permissive Mode: Not Supported 00:10:19.124 NVM Sets: Not Supported 00:10:19.124 Read Recovery Levels: Not Supported 00:10:19.124 Endurance Groups: Not Supported 00:10:19.124 Predictable Latency Mode: Not Supported 00:10:19.124 Traffic Based Keep ALive: Not Supported 00:10:19.124 Namespace Granularity: Not Supported 00:10:19.124 SQ Associations: Not Supported 00:10:19.124 UUID List: Not Supported 00:10:19.124 Multi-Domain Subsystem: Not Supported 00:10:19.124 Fixed Capacity Management: Not Supported 00:10:19.124 Variable Capacity 
Management: Not Supported 00:10:19.124 Delete Endurance Group: Not Supported 00:10:19.124 Delete NVM Set: Not Supported 00:10:19.124 Extended LBA Formats Supported: Supported 00:10:19.124 Flexible Data Placement Supported: Not Supported 00:10:19.124 00:10:19.124 Controller Memory Buffer Support 00:10:19.124 ================================ 00:10:19.124 Supported: No 00:10:19.124 00:10:19.124 Persistent Memory Region Support 00:10:19.124 ================================ 00:10:19.124 Supported: No 00:10:19.124 00:10:19.124 Admin Command Set Attributes 00:10:19.124 ============================ 00:10:19.124 Security Send/Receive: Not Supported 00:10:19.124 Format NVM: Supported 00:10:19.124 Firmware Activate/Download: Not Supported 00:10:19.124 Namespace Management: Supported 00:10:19.124 Device Self-Test: Not Supported 00:10:19.124 Directives: Supported 00:10:19.124 NVMe-MI: Not Supported 00:10:19.124 Virtualization Management: Not Supported 00:10:19.124 Doorbell Buffer Config: Supported 00:10:19.124 Get LBA Status Capability: Not Supported 00:10:19.124 Command & Feature Lockdown Capability: Not Supported 00:10:19.124 Abort Command Limit: 4 00:10:19.124 Async Event Request Limit: 4 00:10:19.124 Number of Firmware Slots: N/A 00:10:19.124 Firmware Slot 1 Read-Only: N/A 00:10:19.124 Firmware Activation Without Reset: N/A 00:10:19.124 Multiple Update Detection Support: N/A 00:10:19.124 Firmware Update Granularity: No Information Provided 00:10:19.124 Per-Namespace SMART Log: Yes 00:10:19.124 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.124 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:19.124 Command Effects Log Page: Supported 00:10:19.124 Get Log Page Extended Data: Supported 00:10:19.124 Telemetry Log Pages: Not Supported 00:10:19.124 Persistent Event Log Pages: Not Supported 00:10:19.124 Supported Log Pages Log Page: May Support 00:10:19.124 Commands Supported & Effects Log Page: Not Supported 00:10:19.124 Feature Identifiers & Effects Log Page:May Support 00:10:19.124 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.124 Data Area 4 for Telemetry Log: Not Supported 00:10:19.124 Error Log Page Entries Supported: 1 00:10:19.124 Keep Alive: Not Supported 00:10:19.124 00:10:19.124 NVM Command Set Attributes 00:10:19.124 ========================== 00:10:19.124 Submission Queue Entry Size 00:10:19.124 Max: 64 00:10:19.124 Min: 64 00:10:19.124 Completion Queue Entry Size 00:10:19.124 Max: 16 00:10:19.124 Min: 16 00:10:19.124 Number of Namespaces: 256 00:10:19.124 Compare Command: Supported 00:10:19.124 Write Uncorrectable Command: Not Supported 00:10:19.124 Dataset Management Command: Supported 00:10:19.124 Write Zeroes Command: Supported 00:10:19.124 Set Features Save Field: Supported 00:10:19.124 Reservations: Not Supported 00:10:19.124 Timestamp: Supported 00:10:19.124 Copy: Supported 00:10:19.124 Volatile Write Cache: Present 00:10:19.124 Atomic Write Unit (Normal): 1 00:10:19.124 Atomic Write Unit (PFail): 1 00:10:19.124 Atomic Compare & Write Unit: 1 00:10:19.124 Fused Compare & Write: Not Supported 00:10:19.124 Scatter-Gather List 00:10:19.124 SGL Command Set: Supported 00:10:19.124 SGL Keyed: Not Supported 00:10:19.124 SGL Bit Bucket Descriptor: Not Supported 00:10:19.124 SGL Metadata Pointer: Not Supported 00:10:19.124 Oversized SGL: Not Supported 00:10:19.124 SGL Metadata Address: Not Supported 00:10:19.124 SGL Offset: Not Supported 00:10:19.124 Transport SGL Data Block: Not Supported 00:10:19.124 Replay Protected Memory Block: Not Supported 00:10:19.124 
00:10:19.124 Firmware Slot Information 00:10:19.124 ========================= 00:10:19.124 Active slot: 1 00:10:19.124 Slot 1 Firmware Revision: 1.0 00:10:19.124 00:10:19.124 00:10:19.124 Commands Supported and Effects 00:10:19.124 ============================== 00:10:19.124 Admin Commands 00:10:19.124 -------------- 00:10:19.124 Delete I/O Submission Queue (00h): Supported 00:10:19.124 Create I/O Submission Queue (01h): Supported 00:10:19.124 Get Log Page (02h): Supported 00:10:19.124 Delete I/O Completion Queue (04h): Supported 00:10:19.124 Create I/O Completion Queue (05h): Supported 00:10:19.124 Identify (06h): Supported 00:10:19.124 Abort (08h): Supported 00:10:19.124 Set Features (09h): Supported 00:10:19.124 Get Features (0Ah): Supported 00:10:19.124 Asynchronous Event Request (0Ch): Supported 00:10:19.124 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.124 Directive Send (19h): Supported 00:10:19.124 Directive Receive (1Ah): Supported 00:10:19.124 Virtualization Management (1Ch): Supported 00:10:19.124 Doorbell Buffer Config (7Ch): Supported 00:10:19.124 Format NVM (80h): Supported LBA-Change 00:10:19.124 I/O Commands 00:10:19.124 ------------ 00:10:19.124 Flush (00h): Supported LBA-Change 00:10:19.124 Write (01h): Supported LBA-Change 00:10:19.124 Read (02h): Supported 00:10:19.124 Compare (05h): Supported 00:10:19.124 Write Zeroes (08h): Supported LBA-Change 00:10:19.124 Dataset Management (09h): Supported LBA-Change 00:10:19.125 Unknown (0Ch): Supported 00:10:19.125 Unknown (12h): Supported 00:10:19.125 Copy (19h): Supported LBA-Change 00:10:19.125 Unknown (1Dh): Supported LBA-Change 00:10:19.125 00:10:19.125 Error Log 00:10:19.125 ========= 00:10:19.125 00:10:19.125 Arbitration 00:10:19.125 =========== 00:10:19.125 Arbitration Burst: no limit 00:10:19.125 00:10:19.125 Power Management 00:10:19.125 ================ 00:10:19.125 Number of Power States: 1 00:10:19.125 Current Power State: Power State #0 00:10:19.125 Power State #0: 00:10:19.125 Max Power: 25.00 W 00:10:19.125 Non-Operational State: Operational 00:10:19.125 Entry Latency: 16 microseconds 00:10:19.125 Exit Latency: 4 microseconds 00:10:19.125 Relative Read Throughput: 0 00:10:19.125 Relative Read Latency: 0 00:10:19.125 Relative Write Throughput: 0 00:10:19.125 Relative Write Latency: 0 00:10:19.125 Idle Power: Not Reported 00:10:19.125 Active Power: Not Reported 00:10:19.125 Non-Operational Permissive Mode: Not Supported 00:10:19.125 00:10:19.125 Health Information 00:10:19.125 ================== 00:10:19.125 Critical Warnings: 00:10:19.125 Available Spare Space: OK 00:10:19.125 Temperature: OK 00:10:19.125 Device Reliability: OK 00:10:19.125 Read Only: No 00:10:19.125 Volatile Memory Backup: OK 00:10:19.125 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.125 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.125 Available Spare: 0% 00:10:19.125 Available Spare Threshold: 0% 00:10:19.125 Life Percentage Used: 0% 00:10:19.125 Data Units Read: 1289 00:10:19.125 Data Units Written: 595 00:10:19.125 Host Read Commands: 59879 00:10:19.125 Host Write Commands: 29399 00:10:19.125 Controller Busy Time: 0 minutes 00:10:19.125 Power Cycles: 0 00:10:19.125 Power On Hours: 0 hours 00:10:19.125 Unsafe Shutdowns: 0 00:10:19.125 Unrecoverable Media Errors: 0 00:10:19.125 Lifetime Error Log Entries: 0 00:10:19.125 Warning Temperature Time: 0 minutes 00:10:19.125 Critical Temperature Time: 0 minutes 00:10:19.125 00:10:19.125 Number of Queues 00:10:19.125 ================ 00:10:19.125 Number of I/O 
Submission Queues: 64 00:10:19.125 Number of I/O Completion Queues: 64 00:10:19.125 00:10:19.125 ZNS Specific Controller Data 00:10:19.125 ============================ 00:10:19.125 Zone Append Size Limit: 0 00:10:19.125 00:10:19.125 00:10:19.125 Active Namespaces 00:10:19.125 ================= 00:10:19.125 Namespace ID:1 00:10:19.125 Error Recovery Timeout: Unlimited 00:10:19.125 Command Set Identifier: [2024-12-08 20:56:40.049751] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 64025 terminated unexpected 00:10:19.125 NVM (00h) 00:10:19.125 Deallocate: Supported 00:10:19.125 Deallocated/Unwritten Error: Supported 00:10:19.125 Deallocated Read Value: All 0x00 00:10:19.125 Deallocate in Write Zeroes: Not Supported 00:10:19.125 Deallocated Guard Field: 0xFFFF 00:10:19.125 Flush: Supported 00:10:19.125 Reservation: Not Supported 00:10:19.125 Namespace Sharing Capabilities: Private 00:10:19.125 Size (in LBAs): 1310720 (5GiB) 00:10:19.125 Capacity (in LBAs): 1310720 (5GiB) 00:10:19.125 Utilization (in LBAs): 1310720 (5GiB) 00:10:19.125 Thin Provisioning: Not Supported 00:10:19.125 Per-NS Atomic Units: No 00:10:19.125 Maximum Single Source Range Length: 128 00:10:19.125 Maximum Copy Length: 128 00:10:19.125 Maximum Source Range Count: 128 00:10:19.125 NGUID/EUI64 Never Reused: No 00:10:19.125 Namespace Write Protected: No 00:10:19.125 Number of LBA Formats: 8 00:10:19.125 Current LBA Format: LBA Format #04 00:10:19.125 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.125 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.125 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.125 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.125 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.125 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.125 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.125 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.125 00:10:19.125 ===================================================== 00:10:19.125 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:19.125 ===================================================== 00:10:19.125 Controller Capabilities/Features 00:10:19.125 ================================ 00:10:19.125 Vendor ID: 1b36 00:10:19.125 Subsystem Vendor ID: 1af4 00:10:19.125 Serial Number: 12343 00:10:19.125 Model Number: QEMU NVMe Ctrl 00:10:19.125 Firmware Version: 8.0.0 00:10:19.125 Recommended Arb Burst: 6 00:10:19.125 IEEE OUI Identifier: 00 54 52 00:10:19.125 Multi-path I/O 00:10:19.125 May have multiple subsystem ports: No 00:10:19.125 May have multiple controllers: Yes 00:10:19.125 Associated with SR-IOV VF: No 00:10:19.125 Max Data Transfer Size: 524288 00:10:19.125 Max Number of Namespaces: 256 00:10:19.125 Max Number of I/O Queues: 64 00:10:19.125 NVMe Specification Version (VS): 1.4 00:10:19.125 NVMe Specification Version (Identify): 1.4 00:10:19.125 Maximum Queue Entries: 2048 00:10:19.125 Contiguous Queues Required: Yes 00:10:19.125 Arbitration Mechanisms Supported 00:10:19.125 Weighted Round Robin: Not Supported 00:10:19.125 Vendor Specific: Not Supported 00:10:19.125 Reset Timeout: 7500 ms 00:10:19.125 Doorbell Stride: 4 bytes 00:10:19.125 NVM Subsystem Reset: Not Supported 00:10:19.125 Command Sets Supported 00:10:19.125 NVM Command Set: Supported 00:10:19.125 Boot Partition: Not Supported 00:10:19.125 Memory Page Size Minimum: 4096 bytes 00:10:19.125 Memory Page Size Maximum: 65536 bytes 00:10:19.125 Persistent Memory Region: Not Supported 00:10:19.125 
Optional Asynchronous Events Supported 00:10:19.125 Namespace Attribute Notices: Supported 00:10:19.125 Firmware Activation Notices: Not Supported 00:10:19.125 ANA Change Notices: Not Supported 00:10:19.125 PLE Aggregate Log Change Notices: Not Supported 00:10:19.125 LBA Status Info Alert Notices: Not Supported 00:10:19.125 EGE Aggregate Log Change Notices: Not Supported 00:10:19.125 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.125 Zone Descriptor Change Notices: Not Supported 00:10:19.125 Discovery Log Change Notices: Not Supported 00:10:19.125 Controller Attributes 00:10:19.125 128-bit Host Identifier: Not Supported 00:10:19.125 Non-Operational Permissive Mode: Not Supported 00:10:19.125 NVM Sets: Not Supported 00:10:19.125 Read Recovery Levels: Not Supported 00:10:19.125 Endurance Groups: Supported 00:10:19.125 Predictable Latency Mode: Not Supported 00:10:19.125 Traffic Based Keep ALive: Not Supported 00:10:19.125 Namespace Granularity: Not Supported 00:10:19.125 SQ Associations: Not Supported 00:10:19.125 UUID List: Not Supported 00:10:19.125 Multi-Domain Subsystem: Not Supported 00:10:19.125 Fixed Capacity Management: Not Supported 00:10:19.125 Variable Capacity Management: Not Supported 00:10:19.125 Delete Endurance Group: Not Supported 00:10:19.125 Delete NVM Set: Not Supported 00:10:19.125 Extended LBA Formats Supported: Supported 00:10:19.125 Flexible Data Placement Supported: Supported 00:10:19.125 00:10:19.125 Controller Memory Buffer Support 00:10:19.125 ================================ 00:10:19.126 Supported: No 00:10:19.126 00:10:19.126 Persistent Memory Region Support 00:10:19.126 ================================ 00:10:19.126 Supported: No 00:10:19.126 00:10:19.126 Admin Command Set Attributes 00:10:19.126 ============================ 00:10:19.126 Security Send/Receive: Not Supported 00:10:19.126 Format NVM: Supported 00:10:19.126 Firmware Activate/Download: Not Supported 00:10:19.126 Namespace Management: Supported 00:10:19.126 Device Self-Test: Not Supported 00:10:19.126 Directives: Supported 00:10:19.126 NVMe-MI: Not Supported 00:10:19.126 Virtualization Management: Not Supported 00:10:19.126 Doorbell Buffer Config: Supported 00:10:19.126 Get LBA Status Capability: Not Supported 00:10:19.126 Command & Feature Lockdown Capability: Not Supported 00:10:19.126 Abort Command Limit: 4 00:10:19.126 Async Event Request Limit: 4 00:10:19.126 Number of Firmware Slots: N/A 00:10:19.126 Firmware Slot 1 Read-Only: N/A 00:10:19.126 Firmware Activation Without Reset: N/A 00:10:19.126 Multiple Update Detection Support: N/A 00:10:19.126 Firmware Update Granularity: No Information Provided 00:10:19.126 Per-Namespace SMART Log: Yes 00:10:19.126 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.126 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:19.126 Command Effects Log Page: Supported 00:10:19.126 Get Log Page Extended Data: Supported 00:10:19.126 Telemetry Log Pages: Not Supported 00:10:19.126 Persistent Event Log Pages: Not Supported 00:10:19.126 Supported Log Pages Log Page: May Support 00:10:19.126 Commands Supported & Effects Log Page: Not Supported 00:10:19.126 Feature Identifiers & Effects Log Page:May Support 00:10:19.126 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.126 Data Area 4 for Telemetry Log: Not Supported 00:10:19.126 Error Log Page Entries Supported: 1 00:10:19.126 Keep Alive: Not Supported 00:10:19.126 00:10:19.126 NVM Command Set Attributes 00:10:19.126 ========================== 00:10:19.126 Submission Queue Entry Size 
00:10:19.126 Max: 64 00:10:19.126 Min: 64 00:10:19.126 Completion Queue Entry Size 00:10:19.126 Max: 16 00:10:19.126 Min: 16 00:10:19.126 Number of Namespaces: 256 00:10:19.126 Compare Command: Supported 00:10:19.126 Write Uncorrectable Command: Not Supported 00:10:19.126 Dataset Management Command: Supported 00:10:19.126 Write Zeroes Command: Supported 00:10:19.126 Set Features Save Field: Supported 00:10:19.126 Reservations: Not Supported 00:10:19.126 Timestamp: Supported 00:10:19.126 Copy: Supported 00:10:19.126 Volatile Write Cache: Present 00:10:19.126 Atomic Write Unit (Normal): 1 00:10:19.126 Atomic Write Unit (PFail): 1 00:10:19.126 Atomic Compare & Write Unit: 1 00:10:19.126 Fused Compare & Write: Not Supported 00:10:19.126 Scatter-Gather List 00:10:19.126 SGL Command Set: Supported 00:10:19.126 SGL Keyed: Not Supported 00:10:19.126 SGL Bit Bucket Descriptor: Not Supported 00:10:19.126 SGL Metadata Pointer: Not Supported 00:10:19.126 Oversized SGL: Not Supported 00:10:19.126 SGL Metadata Address: Not Supported 00:10:19.126 SGL Offset: Not Supported 00:10:19.126 Transport SGL Data Block: Not Supported 00:10:19.126 Replay Protected Memory Block: Not Supported 00:10:19.126 00:10:19.126 Firmware Slot Information 00:10:19.126 ========================= 00:10:19.126 Active slot: 1 00:10:19.126 Slot 1 Firmware Revision: 1.0 00:10:19.126 00:10:19.126 00:10:19.126 Commands Supported and Effects 00:10:19.126 ============================== 00:10:19.126 Admin Commands 00:10:19.126 -------------- 00:10:19.126 Delete I/O Submission Queue (00h): Supported 00:10:19.126 Create I/O Submission Queue (01h): Supported 00:10:19.126 Get Log Page (02h): Supported 00:10:19.126 Delete I/O Completion Queue (04h): Supported 00:10:19.126 Create I/O Completion Queue (05h): Supported 00:10:19.126 Identify (06h): Supported 00:10:19.126 Abort (08h): Supported 00:10:19.126 Set Features (09h): Supported 00:10:19.126 Get Features (0Ah): Supported 00:10:19.126 Asynchronous Event Request (0Ch): Supported 00:10:19.126 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.126 Directive Send (19h): Supported 00:10:19.126 Directive Receive (1Ah): Supported 00:10:19.126 Virtualization Management (1Ch): Supported 00:10:19.126 Doorbell Buffer Config (7Ch): Supported 00:10:19.126 Format NVM (80h): Supported LBA-Change 00:10:19.126 I/O Commands 00:10:19.126 ------------ 00:10:19.126 Flush (00h): Supported LBA-Change 00:10:19.126 Write (01h): Supported LBA-Change 00:10:19.126 Read (02h): Supported 00:10:19.126 Compare (05h): Supported 00:10:19.126 Write Zeroes (08h): Supported LBA-Change 00:10:19.126 Dataset Management (09h): Supported LBA-Change 00:10:19.126 Unknown (0Ch): Supported 00:10:19.126 Unknown (12h): Supported 00:10:19.126 Copy (19h): Supported LBA-Change 00:10:19.126 Unknown (1Dh): Supported LBA-Change 00:10:19.126 00:10:19.126 Error Log 00:10:19.126 ========= 00:10:19.126 00:10:19.126 Arbitration 00:10:19.126 =========== 00:10:19.126 Arbitration Burst: no limit 00:10:19.126 00:10:19.126 Power Management 00:10:19.126 ================ 00:10:19.126 Number of Power States: 1 00:10:19.126 Current Power State: Power State #0 00:10:19.126 Power State #0: 00:10:19.126 Max Power: 25.00 W 00:10:19.126 Non-Operational State: Operational 00:10:19.126 Entry Latency: 16 microseconds 00:10:19.126 Exit Latency: 4 microseconds 00:10:19.126 Relative Read Throughput: 0 00:10:19.126 Relative Read Latency: 0 00:10:19.126 Relative Write Throughput: 0 00:10:19.126 Relative Write Latency: 0 00:10:19.126 Idle Power: Not 
Reported 00:10:19.126 Active Power: Not Reported 00:10:19.126 Non-Operational Permissive Mode: Not Supported 00:10:19.126 00:10:19.126 Health Information 00:10:19.126 ================== 00:10:19.126 Critical Warnings: 00:10:19.126 Available Spare Space: OK 00:10:19.126 Temperature: OK 00:10:19.126 Device Reliability: OK 00:10:19.126 Read Only: No 00:10:19.126 Volatile Memory Backup: OK 00:10:19.126 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.126 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.126 Available Spare: 0% 00:10:19.126 Available Spare Threshold: 0% 00:10:19.126 Life Percentage Used: 0% 00:10:19.126 Data Units Read: 1354 00:10:19.126 Data Units Written: 626 00:10:19.126 Host Read Commands: 60530 00:10:19.126 Host Write Commands: 29707 00:10:19.126 Controller Busy Time: 0 minutes 00:10:19.126 Power Cycles: 0 00:10:19.126 Power On Hours: 0 hours 00:10:19.126 Unsafe Shutdowns: 0 00:10:19.126 Unrecoverable Media Errors: 0 00:10:19.126 Lifetime Error Log Entries: 0 00:10:19.126 Warning Temperature Time: 0 minutes 00:10:19.126 Critical Temperature Time: 0 minutes 00:10:19.126 00:10:19.126 Number of Queues 00:10:19.126 ================ 00:10:19.126 Number of I/O Submission Queues: 64 00:10:19.126 Number of I/O Completion Queues: 64 00:10:19.126 00:10:19.126 ZNS Specific Controller Data 00:10:19.126 ============================ 00:10:19.126 Zone Append Size Limit: 0 00:10:19.126 00:10:19.126 00:10:19.126 Active Namespaces 00:10:19.126 ================= 00:10:19.126 Namespace ID:1 00:10:19.126 Error Recovery Timeout: Unlimited 00:10:19.126 Command Set Identifier: NVM (00h) 00:10:19.126 Deallocate: Supported 00:10:19.126 Deallocated/Unwritten Error: Supported 00:10:19.126 Deallocated Read Value: All 0x00 00:10:19.126 Deallocate in Write Zeroes: Not Supported 00:10:19.126 Deallocated Guard Field: 0xFFFF 00:10:19.126 Flush: Supported 00:10:19.126 Reservation: Not Supported 00:10:19.126 Namespace Sharing Capabilities: Multiple Controllers 00:10:19.126 Size (in LBAs): 262144 (1GiB) 00:10:19.126 Capacity (in LBAs): 262144 (1GiB) 00:10:19.126 Utilization (in LBAs): 262144 (1GiB) 00:10:19.126 Thin Provisioning: Not Supported 00:10:19.126 Per-NS Atomic Units: No 00:10:19.126 Maximum Single Source Range Length: 128 00:10:19.126 Maximum Copy Length: 128 00:10:19.126 Maximum Source Range Count: 128 00:10:19.126 NGUID/EUI64 Never Reused: No 00:10:19.126 Namespace Write Protected: No 00:10:19.126 Endurance group ID: 1 00:10:19.126 Number of LBA Formats: 8 00:10:19.126 Current LBA Format: LBA Format #04 00:10:19.126 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.126 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.126 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.126 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.127 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.127 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.127 [2024-12-08 20:56:40.051481] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 64025 terminated unexpectedly 00:10:19.127 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.127 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.127 00:10:19.127 Get Feature FDP: 00:10:19.127 ================ 00:10:19.127 Enabled: Yes 00:10:19.127 FDP configuration index: 0 00:10:19.127 00:10:19.127 FDP configurations log page 00:10:19.127 =========================== 00:10:19.127 Number of FDP configurations: 1 00:10:19.127 Version: 0 00:10:19.127 Size: 112 00:10:19.127 FDP
Configuration Descriptor: 0 00:10:19.127 Descriptor Size: 96 00:10:19.127 Reclaim Group Identifier format: 2 00:10:19.127 FDP Volatile Write Cache: Not Present 00:10:19.127 FDP Configuration: Valid 00:10:19.127 Vendor Specific Size: 0 00:10:19.127 Number of Reclaim Groups: 2 00:10:19.127 Number of Reclaim Unit Handles: 8 00:10:19.127 Max Placement Identifiers: 128 00:10:19.127 Number of Namespaces Supported: 256 00:10:19.127 Reclaim Unit Nominal Size: 6000000 bytes 00:10:19.127 Estimated Reclaim Unit Time Limit: Not Reported 00:10:19.127 RUH Desc #000: RUH Type: Initially Isolated 00:10:19.127 RUH Desc #001: RUH Type: Initially Isolated 00:10:19.127 RUH Desc #002: RUH Type: Initially Isolated 00:10:19.127 RUH Desc #003: RUH Type: Initially Isolated 00:10:19.127 RUH Desc #004: RUH Type: Initially Isolated 00:10:19.127 RUH Desc #005: RUH Type: Initially Isolated 00:10:19.127 RUH Desc #006: RUH Type: Initially Isolated 00:10:19.127 RUH Desc #007: RUH Type: Initially Isolated 00:10:19.127 00:10:19.127 FDP reclaim unit handle usage log page 00:10:19.127 ====================================== 00:10:19.127 Number of Reclaim Unit Handles: 8 00:10:19.127 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:19.127 RUH Usage Desc #001: RUH Attributes: Unused 00:10:19.127 RUH Usage Desc #002: RUH Attributes: Unused 00:10:19.127 RUH Usage Desc #003: RUH Attributes: Unused 00:10:19.127 RUH Usage Desc #004: RUH Attributes: Unused 00:10:19.127 RUH Usage Desc #005: RUH Attributes: Unused 00:10:19.127 RUH Usage Desc #006: RUH Attributes: Unused 00:10:19.127 RUH Usage Desc #007: RUH Attributes: Unused 00:10:19.127 00:10:19.127 FDP statistics log page 00:10:19.127 ======================= 00:10:19.127 Host bytes with metadata written: 404725760 00:10:19.127 Media bytes with metadata written: 404787200 00:10:19.127 Media bytes erased: 0 00:10:19.127 00:10:19.127 FDP events log page 00:10:19.127 =================== 00:10:19.127 Number of FDP events: 0 00:10:19.127 00:10:19.127 ===================================================== 00:10:19.127 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:19.127 ===================================================== 00:10:19.127 Controller Capabilities/Features 00:10:19.127 ================================ 00:10:19.127 Vendor ID: 1b36 00:10:19.127 Subsystem Vendor ID: 1af4 00:10:19.127 Serial Number: 12342 00:10:19.127 Model Number: QEMU NVMe Ctrl 00:10:19.127 Firmware Version: 8.0.0 00:10:19.127 Recommended Arb Burst: 6 00:10:19.127 IEEE OUI Identifier: 00 54 52 00:10:19.127 Multi-path I/O 00:10:19.127 May have multiple subsystem ports: No 00:10:19.127 May have multiple controllers: No 00:10:19.127 Associated with SR-IOV VF: No 00:10:19.127 Max Data Transfer Size: 524288 00:10:19.127 Max Number of Namespaces: 256 00:10:19.127 Max Number of I/O Queues: 64 00:10:19.127 NVMe Specification Version (VS): 1.4 00:10:19.127 NVMe Specification Version (Identify): 1.4 00:10:19.127 Maximum Queue Entries: 2048 00:10:19.127 Contiguous Queues Required: Yes 00:10:19.127 Arbitration Mechanisms Supported 00:10:19.127 Weighted Round Robin: Not Supported 00:10:19.127 Vendor Specific: Not Supported 00:10:19.127 Reset Timeout: 7500 ms 00:10:19.127 Doorbell Stride: 4 bytes 00:10:19.127 NVM Subsystem Reset: Not Supported 00:10:19.127 Command Sets Supported 00:10:19.127 NVM Command Set: Supported 00:10:19.127 Boot Partition: Not Supported 00:10:19.127 Memory Page Size Minimum: 4096 bytes 00:10:19.127 Memory Page Size Maximum: 65536 bytes 00:10:19.127 Persistent Memory Region: Not
Supported 00:10:19.127 Optional Asynchronous Events Supported 00:10:19.127 Namespace Attribute Notices: Supported 00:10:19.127 Firmware Activation Notices: Not Supported 00:10:19.127 ANA Change Notices: Not Supported 00:10:19.127 PLE Aggregate Log Change Notices: Not Supported 00:10:19.127 LBA Status Info Alert Notices: Not Supported 00:10:19.127 EGE Aggregate Log Change Notices: Not Supported 00:10:19.127 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.127 Zone Descriptor Change Notices: Not Supported 00:10:19.127 Discovery Log Change Notices: Not Supported 00:10:19.127 Controller Attributes 00:10:19.127 128-bit Host Identifier: Not Supported 00:10:19.127 Non-Operational Permissive Mode: Not Supported 00:10:19.127 NVM Sets: Not Supported 00:10:19.127 Read Recovery Levels: Not Supported 00:10:19.127 Endurance Groups: Not Supported 00:10:19.127 Predictable Latency Mode: Not Supported 00:10:19.127 Traffic Based Keep Alive: Not Supported 00:10:19.127 Namespace Granularity: Not Supported 00:10:19.127 SQ Associations: Not Supported 00:10:19.127 UUID List: Not Supported 00:10:19.127 Multi-Domain Subsystem: Not Supported 00:10:19.127 Fixed Capacity Management: Not Supported 00:10:19.127 Variable Capacity Management: Not Supported 00:10:19.127 Delete Endurance Group: Not Supported 00:10:19.127 Delete NVM Set: Not Supported 00:10:19.127 Extended LBA Formats Supported: Supported 00:10:19.127 Flexible Data Placement Supported: Not Supported 00:10:19.127 00:10:19.127 Controller Memory Buffer Support 00:10:19.127 ================================ 00:10:19.127 Supported: No 00:10:19.127 00:10:19.127 Persistent Memory Region Support 00:10:19.127 ================================ 00:10:19.127 Supported: No 00:10:19.127 00:10:19.127 Admin Command Set Attributes 00:10:19.127 ============================ 00:10:19.127 Security Send/Receive: Not Supported 00:10:19.127 Format NVM: Supported 00:10:19.127 Firmware Activate/Download: Not Supported 00:10:19.127 Namespace Management: Supported 00:10:19.127 Device Self-Test: Not Supported 00:10:19.127 Directives: Supported 00:10:19.127 NVMe-MI: Not Supported 00:10:19.127 Virtualization Management: Not Supported 00:10:19.127 Doorbell Buffer Config: Supported 00:10:19.127 Get LBA Status Capability: Not Supported 00:10:19.127 Command & Feature Lockdown Capability: Not Supported 00:10:19.127 Abort Command Limit: 4 00:10:19.127 Async Event Request Limit: 4 00:10:19.127 Number of Firmware Slots: N/A 00:10:19.127 Firmware Slot 1 Read-Only: N/A 00:10:19.128 Firmware Activation Without Reset: N/A 00:10:19.128 Multiple Update Detection Support: N/A 00:10:19.128 Firmware Update Granularity: No Information Provided 00:10:19.128 Per-Namespace SMART Log: Yes 00:10:19.128 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.128 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:19.128 Command Effects Log Page: Supported 00:10:19.128 Get Log Page Extended Data: Supported 00:10:19.128 Telemetry Log Pages: Not Supported 00:10:19.128 Persistent Event Log Pages: Not Supported 00:10:19.128 Supported Log Pages Log Page: May Support 00:10:19.128 Commands Supported & Effects Log Page: Not Supported 00:10:19.128 Feature Identifiers & Effects Log Page: May Support 00:10:19.128 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.128 Data Area 4 for Telemetry Log: Not Supported 00:10:19.128 Error Log Page Entries Supported: 1 00:10:19.128 Keep Alive: Not Supported 00:10:19.128 00:10:19.128 NVM Command Set Attributes 00:10:19.128 ========================== 00:10:19.128
Submission Queue Entry Size 00:10:19.128 Max: 64 00:10:19.128 Min: 64 00:10:19.128 Completion Queue Entry Size 00:10:19.128 Max: 16 00:10:19.128 Min: 16 00:10:19.128 Number of Namespaces: 256 00:10:19.128 Compare Command: Supported 00:10:19.128 Write Uncorrectable Command: Not Supported 00:10:19.128 Dataset Management Command: Supported 00:10:19.128 Write Zeroes Command: Supported 00:10:19.128 Set Features Save Field: Supported 00:10:19.128 Reservations: Not Supported 00:10:19.128 Timestamp: Supported 00:10:19.128 Copy: Supported 00:10:19.128 Volatile Write Cache: Present 00:10:19.128 Atomic Write Unit (Normal): 1 00:10:19.128 Atomic Write Unit (PFail): 1 00:10:19.128 Atomic Compare & Write Unit: 1 00:10:19.128 Fused Compare & Write: Not Supported 00:10:19.128 Scatter-Gather List 00:10:19.128 SGL Command Set: Supported 00:10:19.128 SGL Keyed: Not Supported 00:10:19.128 SGL Bit Bucket Descriptor: Not Supported 00:10:19.128 SGL Metadata Pointer: Not Supported 00:10:19.128 Oversized SGL: Not Supported 00:10:19.128 SGL Metadata Address: Not Supported 00:10:19.128 SGL Offset: Not Supported 00:10:19.128 Transport SGL Data Block: Not Supported 00:10:19.128 Replay Protected Memory Block: Not Supported 00:10:19.128 00:10:19.128 Firmware Slot Information 00:10:19.128 ========================= 00:10:19.128 Active slot: 1 00:10:19.128 Slot 1 Firmware Revision: 1.0 00:10:19.128 00:10:19.128 00:10:19.128 Commands Supported and Effects 00:10:19.128 ============================== 00:10:19.128 Admin Commands 00:10:19.128 -------------- 00:10:19.128 Delete I/O Submission Queue (00h): Supported 00:10:19.128 Create I/O Submission Queue (01h): Supported 00:10:19.128 Get Log Page (02h): Supported 00:10:19.128 Delete I/O Completion Queue (04h): Supported 00:10:19.128 Create I/O Completion Queue (05h): Supported 00:10:19.128 Identify (06h): Supported 00:10:19.128 Abort (08h): Supported 00:10:19.128 Set Features (09h): Supported 00:10:19.128 Get Features (0Ah): Supported 00:10:19.128 Asynchronous Event Request (0Ch): Supported 00:10:19.128 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.128 Directive Send (19h): Supported 00:10:19.128 Directive Receive (1Ah): Supported 00:10:19.128 Virtualization Management (1Ch): Supported 00:10:19.128 Doorbell Buffer Config (7Ch): Supported 00:10:19.128 Format NVM (80h): Supported LBA-Change 00:10:19.128 I/O Commands 00:10:19.128 ------------ 00:10:19.128 Flush (00h): Supported LBA-Change 00:10:19.128 Write (01h): Supported LBA-Change 00:10:19.128 Read (02h): Supported 00:10:19.128 Compare (05h): Supported 00:10:19.128 Write Zeroes (08h): Supported LBA-Change 00:10:19.128 Dataset Management (09h): Supported LBA-Change 00:10:19.128 Unknown (0Ch): Supported 00:10:19.128 Unknown (12h): Supported 00:10:19.128 Copy (19h): Supported LBA-Change 00:10:19.128 Unknown (1Dh): Supported LBA-Change 00:10:19.128 00:10:19.128 Error Log 00:10:19.128 ========= 00:10:19.128 00:10:19.128 Arbitration 00:10:19.128 =========== 00:10:19.128 Arbitration Burst: no limit 00:10:19.128 00:10:19.128 Power Management 00:10:19.128 ================ 00:10:19.128 Number of Power States: 1 00:10:19.128 Current Power State: Power State #0 00:10:19.128 Power State #0: 00:10:19.128 Max Power: 25.00 W 00:10:19.128 Non-Operational State: Operational 00:10:19.128 Entry Latency: 16 microseconds 00:10:19.128 Exit Latency: 4 microseconds 00:10:19.128 Relative Read Throughput: 0 00:10:19.128 Relative Read Latency: 0 00:10:19.128 Relative Write Throughput: 0 00:10:19.128 Relative Write Latency: 0 
00:10:19.128 Idle Power: Not Reported 00:10:19.128 Active Power: Not Reported 00:10:19.128 Non-Operational Permissive Mode: Not Supported 00:10:19.128 00:10:19.128 Health Information 00:10:19.128 ================== 00:10:19.128 Critical Warnings: 00:10:19.128 Available Spare Space: OK 00:10:19.128 Temperature: OK 00:10:19.128 Device Reliability: OK 00:10:19.128 Read Only: No 00:10:19.128 Volatile Memory Backup: OK 00:10:19.128 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.128 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.128 Available Spare: 0% 00:10:19.128 Available Spare Threshold: 0% 00:10:19.128 Life Percentage Used: 0% 00:10:19.128 Data Units Read: 3948 00:10:19.128 Data Units Written: 1816 00:10:19.128 Host Read Commands: 180806 00:10:19.128 Host Write Commands: 88578 00:10:19.128 Controller Busy Time: 0 minutes 00:10:19.128 Power Cycles: 0 00:10:19.128 Power On Hours: 0 hours 00:10:19.128 Unsafe Shutdowns: 0 00:10:19.128 Unrecoverable Media Errors: 0 00:10:19.128 Lifetime Error Log Entries: 0 00:10:19.128 Warning Temperature Time: 0 minutes 00:10:19.128 Critical Temperature Time: 0 minutes 00:10:19.128 00:10:19.128 Number of Queues 00:10:19.128 ================ 00:10:19.128 Number of I/O Submission Queues: 64 00:10:19.128 Number of I/O Completion Queues: 64 00:10:19.128 00:10:19.128 ZNS Specific Controller Data 00:10:19.128 ============================ 00:10:19.128 Zone Append Size Limit: 0 00:10:19.128 00:10:19.128 00:10:19.128 Active Namespaces 00:10:19.128 ================= 00:10:19.128 Namespace ID:1 00:10:19.128 Error Recovery Timeout: Unlimited 00:10:19.128 Command Set Identifier: NVM (00h) 00:10:19.128 Deallocate: Supported 00:10:19.128 Deallocated/Unwritten Error: Supported 00:10:19.128 Deallocated Read Value: All 0x00 00:10:19.128 Deallocate in Write Zeroes: Not Supported 00:10:19.128 Deallocated Guard Field: 0xFFFF 00:10:19.128 Flush: Supported 00:10:19.128 Reservation: Not Supported 00:10:19.128 Namespace Sharing Capabilities: Private 00:10:19.128 Size (in LBAs): 1048576 (4GiB) 00:10:19.128 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.128 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.128 Thin Provisioning: Not Supported 00:10:19.128 Per-NS Atomic Units: No 00:10:19.128 Maximum Single Source Range Length: 128 00:10:19.128 Maximum Copy Length: 128 00:10:19.128 Maximum Source Range Count: 128 00:10:19.128 NGUID/EUI64 Never Reused: No 00:10:19.128 Namespace Write Protected: No 00:10:19.128 Number of LBA Formats: 8 00:10:19.128 Current LBA Format: LBA Format #04 00:10:19.128 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.128 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.128 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.128 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.128 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.128 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.128 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.128 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.128 00:10:19.128 Namespace ID:2 00:10:19.128 Error Recovery Timeout: Unlimited 00:10:19.128 Command Set Identifier: NVM (00h) 00:10:19.128 Deallocate: Supported 00:10:19.128 Deallocated/Unwritten Error: Supported 00:10:19.128 Deallocated Read Value: All 0x00 00:10:19.128 Deallocate in Write Zeroes: Not Supported 00:10:19.128 Deallocated Guard Field: 0xFFFF 00:10:19.128 Flush: Supported 00:10:19.128 Reservation: Not Supported 00:10:19.129 Namespace Sharing Capabilities: Private 00:10:19.129 Size (in LBAs): 
1048576 (4GiB) 00:10:19.129 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.129 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.129 Thin Provisioning: Not Supported 00:10:19.129 Per-NS Atomic Units: No 00:10:19.129 Maximum Single Source Range Length: 128 00:10:19.129 Maximum Copy Length: 128 00:10:19.129 Maximum Source Range Count: 128 00:10:19.129 NGUID/EUI64 Never Reused: No 00:10:19.129 Namespace Write Protected: No 00:10:19.129 Number of LBA Formats: 8 00:10:19.129 Current LBA Format: LBA Format #04 00:10:19.129 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.129 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.129 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.129 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.129 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.129 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.129 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.129 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.129 00:10:19.129 Namespace ID:3 00:10:19.129 Error Recovery Timeout: Unlimited 00:10:19.129 Command Set Identifier: NVM (00h) 00:10:19.129 Deallocate: Supported 00:10:19.129 Deallocated/Unwritten Error: Supported 00:10:19.129 Deallocated Read Value: All 0x00 00:10:19.129 Deallocate in Write Zeroes: Not Supported 00:10:19.129 Deallocated Guard Field: 0xFFFF 00:10:19.129 Flush: Supported 00:10:19.129 Reservation: Not Supported 00:10:19.129 Namespace Sharing Capabilities: Private 00:10:19.129 Size (in LBAs): 1048576 (4GiB) 00:10:19.129 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.129 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.129 Thin Provisioning: Not Supported 00:10:19.129 Per-NS Atomic Units: No 00:10:19.129 Maximum Single Source Range Length: 128 00:10:19.129 Maximum Copy Length: 128 00:10:19.129 Maximum Source Range Count: 128 00:10:19.129 NGUID/EUI64 Never Reused: No 00:10:19.129 Namespace Write Protected: No 00:10:19.129 Number of LBA Formats: 8 00:10:19.129 Current LBA Format: LBA Format #04 00:10:19.129 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.129 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.129 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.129 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.129 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.129 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.129 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.129 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.129 00:10:19.129 20:56:40 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:19.129 20:56:40 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:10:19.404 ===================================================== 00:10:19.404 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:19.404 ===================================================== 00:10:19.404 Controller Capabilities/Features 00:10:19.404 ================================ 00:10:19.404 Vendor ID: 1b36 00:10:19.404 Subsystem Vendor ID: 1af4 00:10:19.404 Serial Number: 12340 00:10:19.404 Model Number: QEMU NVMe Ctrl 00:10:19.404 Firmware Version: 8.0.0 00:10:19.404 Recommended Arb Burst: 6 00:10:19.404 IEEE OUI Identifier: 00 54 52 00:10:19.404 Multi-path I/O 00:10:19.404 May have multiple subsystem ports: No 00:10:19.404 May have multiple controllers: No 00:10:19.404 Associated with SR-IOV VF: No 00:10:19.404 Max Data Transfer Size: 524288 00:10:19.404 Max Number of Namespaces: 256 
00:10:19.404 Max Number of I/O Queues: 64 00:10:19.404 NVMe Specification Version (VS): 1.4 00:10:19.404 NVMe Specification Version (Identify): 1.4 00:10:19.404 Maximum Queue Entries: 2048 00:10:19.404 Contiguous Queues Required: Yes 00:10:19.404 Arbitration Mechanisms Supported 00:10:19.404 Weighted Round Robin: Not Supported 00:10:19.404 Vendor Specific: Not Supported 00:10:19.404 Reset Timeout: 7500 ms 00:10:19.404 Doorbell Stride: 4 bytes 00:10:19.404 NVM Subsystem Reset: Not Supported 00:10:19.404 Command Sets Supported 00:10:19.404 NVM Command Set: Supported 00:10:19.404 Boot Partition: Not Supported 00:10:19.404 Memory Page Size Minimum: 4096 bytes 00:10:19.404 Memory Page Size Maximum: 65536 bytes 00:10:19.404 Persistent Memory Region: Not Supported 00:10:19.404 Optional Asynchronous Events Supported 00:10:19.404 Namespace Attribute Notices: Supported 00:10:19.404 Firmware Activation Notices: Not Supported 00:10:19.404 ANA Change Notices: Not Supported 00:10:19.404 PLE Aggregate Log Change Notices: Not Supported 00:10:19.404 LBA Status Info Alert Notices: Not Supported 00:10:19.404 EGE Aggregate Log Change Notices: Not Supported 00:10:19.404 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.404 Zone Descriptor Change Notices: Not Supported 00:10:19.404 Discovery Log Change Notices: Not Supported 00:10:19.404 Controller Attributes 00:10:19.404 128-bit Host Identifier: Not Supported 00:10:19.404 Non-Operational Permissive Mode: Not Supported 00:10:19.404 NVM Sets: Not Supported 00:10:19.404 Read Recovery Levels: Not Supported 00:10:19.404 Endurance Groups: Not Supported 00:10:19.404 Predictable Latency Mode: Not Supported 00:10:19.404 Traffic Based Keep Alive: Not Supported 00:10:19.404 Namespace Granularity: Not Supported 00:10:19.404 SQ Associations: Not Supported 00:10:19.404 UUID List: Not Supported 00:10:19.404 Multi-Domain Subsystem: Not Supported 00:10:19.404 Fixed Capacity Management: Not Supported 00:10:19.404 Variable Capacity Management: Not Supported 00:10:19.404 Delete Endurance Group: Not Supported 00:10:19.404 Delete NVM Set: Not Supported 00:10:19.404 Extended LBA Formats Supported: Supported 00:10:19.404 Flexible Data Placement Supported: Not Supported 00:10:19.404 00:10:19.404 Controller Memory Buffer Support 00:10:19.404 ================================ 00:10:19.404 Supported: No 00:10:19.404 00:10:19.404 Persistent Memory Region Support 00:10:19.404 ================================ 00:10:19.404 Supported: No 00:10:19.404 00:10:19.404 Admin Command Set Attributes 00:10:19.404 ============================ 00:10:19.404 Security Send/Receive: Not Supported 00:10:19.404 Format NVM: Supported 00:10:19.404 Firmware Activate/Download: Not Supported 00:10:19.404 Namespace Management: Supported 00:10:19.404 Device Self-Test: Not Supported 00:10:19.404 Directives: Supported 00:10:19.404 NVMe-MI: Not Supported 00:10:19.404 Virtualization Management: Not Supported 00:10:19.404 Doorbell Buffer Config: Supported 00:10:19.404 Get LBA Status Capability: Not Supported 00:10:19.404 Command & Feature Lockdown Capability: Not Supported 00:10:19.404 Abort Command Limit: 4 00:10:19.404 Async Event Request Limit: 4 00:10:19.404 Number of Firmware Slots: N/A 00:10:19.404 Firmware Slot 1 Read-Only: N/A 00:10:19.404 Firmware Activation Without Reset: N/A 00:10:19.404 Multiple Update Detection Support: N/A 00:10:19.404 Firmware Update Granularity: No Information Provided 00:10:19.404 Per-Namespace SMART Log: Yes 00:10:19.404 Asymmetric Namespace Access Log Page: Not Supported
00:10:19.404 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:19.404 Command Effects Log Page: Supported 00:10:19.404 Get Log Page Extended Data: Supported 00:10:19.404 Telemetry Log Pages: Not Supported 00:10:19.404 Persistent Event Log Pages: Not Supported 00:10:19.404 Supported Log Pages Log Page: May Support 00:10:19.404 Commands Supported & Effects Log Page: Not Supported 00:10:19.404 Feature Identifiers & Effects Log Page: May Support 00:10:19.404 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.404 Data Area 4 for Telemetry Log: Not Supported 00:10:19.404 Error Log Page Entries Supported: 1 00:10:19.404 Keep Alive: Not Supported 00:10:19.404 00:10:19.404 NVM Command Set Attributes 00:10:19.404 ========================== 00:10:19.404 Submission Queue Entry Size 00:10:19.404 Max: 64 00:10:19.404 Min: 64 00:10:19.404 Completion Queue Entry Size 00:10:19.404 Max: 16 00:10:19.404 Min: 16 00:10:19.404 Number of Namespaces: 256 00:10:19.404 Compare Command: Supported 00:10:19.404 Write Uncorrectable Command: Not Supported 00:10:19.404 Dataset Management Command: Supported 00:10:19.404 Write Zeroes Command: Supported 00:10:19.404 Set Features Save Field: Supported 00:10:19.404 Reservations: Not Supported 00:10:19.404 Timestamp: Supported 00:10:19.404 Copy: Supported 00:10:19.404 Volatile Write Cache: Present 00:10:19.404 Atomic Write Unit (Normal): 1 00:10:19.404 Atomic Write Unit (PFail): 1 00:10:19.404 Atomic Compare & Write Unit: 1 00:10:19.404 Fused Compare & Write: Not Supported 00:10:19.404 Scatter-Gather List 00:10:19.404 SGL Command Set: Supported 00:10:19.404 SGL Keyed: Not Supported 00:10:19.404 SGL Bit Bucket Descriptor: Not Supported 00:10:19.404 SGL Metadata Pointer: Not Supported 00:10:19.404 Oversized SGL: Not Supported 00:10:19.404 SGL Metadata Address: Not Supported 00:10:19.404 SGL Offset: Not Supported 00:10:19.404 Transport SGL Data Block: Not Supported 00:10:19.404 Replay Protected Memory Block: Not Supported 00:10:19.404 00:10:19.404 Firmware Slot Information 00:10:19.404 ========================= 00:10:19.404 Active slot: 1 00:10:19.404 Slot 1 Firmware Revision: 1.0 00:10:19.404 00:10:19.404 00:10:19.404 Commands Supported and Effects 00:10:19.404 ============================== 00:10:19.404 Admin Commands 00:10:19.404 -------------- 00:10:19.404 Delete I/O Submission Queue (00h): Supported 00:10:19.404 Create I/O Submission Queue (01h): Supported 00:10:19.405 Get Log Page (02h): Supported 00:10:19.405 Delete I/O Completion Queue (04h): Supported 00:10:19.405 Create I/O Completion Queue (05h): Supported 00:10:19.405 Identify (06h): Supported 00:10:19.405 Abort (08h): Supported 00:10:19.405 Set Features (09h): Supported 00:10:19.405 Get Features (0Ah): Supported 00:10:19.405 Asynchronous Event Request (0Ch): Supported 00:10:19.405 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.405 Directive Send (19h): Supported 00:10:19.405 Directive Receive (1Ah): Supported 00:10:19.405 Virtualization Management (1Ch): Supported 00:10:19.405 Doorbell Buffer Config (7Ch): Supported 00:10:19.405 Format NVM (80h): Supported LBA-Change 00:10:19.405 I/O Commands 00:10:19.405 ------------ 00:10:19.405 Flush (00h): Supported LBA-Change 00:10:19.405 Write (01h): Supported LBA-Change 00:10:19.405 Read (02h): Supported 00:10:19.405 Compare (05h): Supported 00:10:19.405 Write Zeroes (08h): Supported LBA-Change 00:10:19.405 Dataset Management (09h): Supported LBA-Change 00:10:19.405 Unknown (0Ch): Supported 00:10:19.405 Unknown (12h): Supported 00:10:19.405 Copy (19h):
Supported LBA-Change 00:10:19.405 Unknown (1Dh): Supported LBA-Change 00:10:19.405 00:10:19.405 Error Log 00:10:19.405 ========= 00:10:19.405 00:10:19.405 Arbitration 00:10:19.405 =========== 00:10:19.405 Arbitration Burst: no limit 00:10:19.405 00:10:19.405 Power Management 00:10:19.405 ================ 00:10:19.405 Number of Power States: 1 00:10:19.405 Current Power State: Power State #0 00:10:19.405 Power State #0: 00:10:19.405 Max Power: 25.00 W 00:10:19.405 Non-Operational State: Operational 00:10:19.405 Entry Latency: 16 microseconds 00:10:19.405 Exit Latency: 4 microseconds 00:10:19.405 Relative Read Throughput: 0 00:10:19.405 Relative Read Latency: 0 00:10:19.405 Relative Write Throughput: 0 00:10:19.405 Relative Write Latency: 0 00:10:19.405 Idle Power: Not Reported 00:10:19.405 Active Power: Not Reported 00:10:19.405 Non-Operational Permissive Mode: Not Supported 00:10:19.405 00:10:19.405 Health Information 00:10:19.405 ================== 00:10:19.405 Critical Warnings: 00:10:19.405 Available Spare Space: OK 00:10:19.405 Temperature: OK 00:10:19.405 Device Reliability: OK 00:10:19.405 Read Only: No 00:10:19.405 Volatile Memory Backup: OK 00:10:19.405 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.405 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.405 Available Spare: 0% 00:10:19.405 Available Spare Threshold: 0% 00:10:19.405 Life Percentage Used: 0% 00:10:19.405 Data Units Read: 1909 00:10:19.405 Data Units Written: 875 00:10:19.405 Host Read Commands: 86844 00:10:19.405 Host Write Commands: 42994 00:10:19.405 Controller Busy Time: 0 minutes 00:10:19.405 Power Cycles: 0 00:10:19.405 Power On Hours: 0 hours 00:10:19.405 Unsafe Shutdowns: 0 00:10:19.405 Unrecoverable Media Errors: 0 00:10:19.405 Lifetime Error Log Entries: 0 00:10:19.405 Warning Temperature Time: 0 minutes 00:10:19.405 Critical Temperature Time: 0 minutes 00:10:19.405 00:10:19.405 Number of Queues 00:10:19.405 ================ 00:10:19.405 Number of I/O Submission Queues: 64 00:10:19.405 Number of I/O Completion Queues: 64 00:10:19.405 00:10:19.405 ZNS Specific Controller Data 00:10:19.405 ============================ 00:10:19.405 Zone Append Size Limit: 0 00:10:19.405 00:10:19.405 00:10:19.405 Active Namespaces 00:10:19.405 ================= 00:10:19.405 Namespace ID:1 00:10:19.405 Error Recovery Timeout: Unlimited 00:10:19.405 Command Set Identifier: NVM (00h) 00:10:19.405 Deallocate: Supported 00:10:19.405 Deallocated/Unwritten Error: Supported 00:10:19.405 Deallocated Read Value: All 0x00 00:10:19.405 Deallocate in Write Zeroes: Not Supported 00:10:19.405 Deallocated Guard Field: 0xFFFF 00:10:19.405 Flush: Supported 00:10:19.405 Reservation: Not Supported 00:10:19.405 Metadata Transferred as: Separate Metadata Buffer 00:10:19.405 Namespace Sharing Capabilities: Private 00:10:19.405 Size (in LBAs): 1548666 (5GiB) 00:10:19.405 Capacity (in LBAs): 1548666 (5GiB) 00:10:19.405 Utilization (in LBAs): 1548666 (5GiB) 00:10:19.405 Thin Provisioning: Not Supported 00:10:19.405 Per-NS Atomic Units: No 00:10:19.405 Maximum Single Source Range Length: 128 00:10:19.405 Maximum Copy Length: 128 00:10:19.405 Maximum Source Range Count: 128 00:10:19.405 NGUID/EUI64 Never Reused: No 00:10:19.405 Namespace Write Protected: No 00:10:19.405 Number of LBA Formats: 8 00:10:19.405 Current LBA Format: LBA Format #07 00:10:19.405 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.405 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.405 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.405 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:10:19.405 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.405 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.405 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.405 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.405 00:10:19.405 20:56:40 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:19.405 20:56:40 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:10:19.664 ===================================================== 00:10:19.664 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:19.664 ===================================================== 00:10:19.664 Controller Capabilities/Features 00:10:19.664 ================================ 00:10:19.664 Vendor ID: 1b36 00:10:19.664 Subsystem Vendor ID: 1af4 00:10:19.664 Serial Number: 12341 00:10:19.664 Model Number: QEMU NVMe Ctrl 00:10:19.664 Firmware Version: 8.0.0 00:10:19.664 Recommended Arb Burst: 6 00:10:19.664 IEEE OUI Identifier: 00 54 52 00:10:19.664 Multi-path I/O 00:10:19.664 May have multiple subsystem ports: No 00:10:19.664 May have multiple controllers: No 00:10:19.664 Associated with SR-IOV VF: No 00:10:19.664 Max Data Transfer Size: 524288 00:10:19.664 Max Number of Namespaces: 256 00:10:19.664 Max Number of I/O Queues: 64 00:10:19.664 NVMe Specification Version (VS): 1.4 00:10:19.664 NVMe Specification Version (Identify): 1.4 00:10:19.664 Maximum Queue Entries: 2048 00:10:19.664 Contiguous Queues Required: Yes 00:10:19.664 Arbitration Mechanisms Supported 00:10:19.664 Weighted Round Robin: Not Supported 00:10:19.664 Vendor Specific: Not Supported 00:10:19.664 Reset Timeout: 7500 ms 00:10:19.664 Doorbell Stride: 4 bytes 00:10:19.664 NVM Subsystem Reset: Not Supported 00:10:19.664 Command Sets Supported 00:10:19.664 NVM Command Set: Supported 00:10:19.664 Boot Partition: Not Supported 00:10:19.664 Memory Page Size Minimum: 4096 bytes 00:10:19.664 Memory Page Size Maximum: 65536 bytes 00:10:19.664 Persistent Memory Region: Not Supported 00:10:19.664 Optional Asynchronous Events Supported 00:10:19.664 Namespace Attribute Notices: Supported 00:10:19.664 Firmware Activation Notices: Not Supported 00:10:19.664 ANA Change Notices: Not Supported 00:10:19.664 PLE Aggregate Log Change Notices: Not Supported 00:10:19.664 LBA Status Info Alert Notices: Not Supported 00:10:19.664 EGE Aggregate Log Change Notices: Not Supported 00:10:19.664 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.664 Zone Descriptor Change Notices: Not Supported 00:10:19.665 Discovery Log Change Notices: Not Supported 00:10:19.665 Controller Attributes 00:10:19.665 128-bit Host Identifier: Not Supported 00:10:19.665 Non-Operational Permissive Mode: Not Supported 00:10:19.665 NVM Sets: Not Supported 00:10:19.665 Read Recovery Levels: Not Supported 00:10:19.665 Endurance Groups: Not Supported 00:10:19.665 Predictable Latency Mode: Not Supported 00:10:19.665 Traffic Based Keep Alive: Not Supported 00:10:19.665 Namespace Granularity: Not Supported 00:10:19.665 SQ Associations: Not Supported 00:10:19.665 UUID List: Not Supported 00:10:19.665 Multi-Domain Subsystem: Not Supported 00:10:19.665 Fixed Capacity Management: Not Supported 00:10:19.665 Variable Capacity Management: Not Supported 00:10:19.665 Delete Endurance Group: Not Supported 00:10:19.665 Delete NVM Set: Not Supported 00:10:19.665 Extended LBA Formats Supported: Supported 00:10:19.665 Flexible Data Placement Supported: Not Supported
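The nvme.sh trace entries in this stretch of the log (for bdf in "${bdfs[@]}" followed by the spdk_nvme_identify call) show that each of these dumps comes from one pass of a loop over the PCIe BDFs under test. Reconstructed as a standalone shell sketch, with the bdfs array filled in from the addresses that appear in this log (the array contents are illustrative, not copied from the script):

  bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)  # addresses seen in this log
  for bdf in "${bdfs[@]}"; do
      # identify controller 0 behind each PCIe address, as the trace lines show
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:$bdf" -i 0
  done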
00:10:19.665 00:10:19.665 Controller Memory Buffer Support 00:10:19.665 ================================ 00:10:19.665 Supported: No 00:10:19.665 00:10:19.665 Persistent Memory Region Support 00:10:19.665 ================================ 00:10:19.665 Supported: No 00:10:19.665 00:10:19.665 Admin Command Set Attributes 00:10:19.665 ============================ 00:10:19.665 Security Send/Receive: Not Supported 00:10:19.665 Format NVM: Supported 00:10:19.665 Firmware Activate/Download: Not Supported 00:10:19.665 Namespace Management: Supported 00:10:19.665 Device Self-Test: Not Supported 00:10:19.665 Directives: Supported 00:10:19.665 NVMe-MI: Not Supported 00:10:19.665 Virtualization Management: Not Supported 00:10:19.665 Doorbell Buffer Config: Supported 00:10:19.665 Get LBA Status Capability: Not Supported 00:10:19.665 Command & Feature Lockdown Capability: Not Supported 00:10:19.665 Abort Command Limit: 4 00:10:19.665 Async Event Request Limit: 4 00:10:19.665 Number of Firmware Slots: N/A 00:10:19.665 Firmware Slot 1 Read-Only: N/A 00:10:19.665 Firmware Activation Without Reset: N/A 00:10:19.665 Multiple Update Detection Support: N/A 00:10:19.665 Firmware Update Granularity: No Information Provided 00:10:19.665 Per-Namespace SMART Log: Yes 00:10:19.665 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.665 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:19.665 Command Effects Log Page: Supported 00:10:19.665 Get Log Page Extended Data: Supported 00:10:19.665 Telemetry Log Pages: Not Supported 00:10:19.665 Persistent Event Log Pages: Not Supported 00:10:19.665 Supported Log Pages Log Page: May Support 00:10:19.665 Commands Supported & Effects Log Page: Not Supported 00:10:19.665 Feature Identifiers & Effects Log Page: May Support 00:10:19.665 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.665 Data Area 4 for Telemetry Log: Not Supported 00:10:19.665 Error Log Page Entries Supported: 1 00:10:19.665 Keep Alive: Not Supported 00:10:19.665 00:10:19.665 NVM Command Set Attributes 00:10:19.665 ========================== 00:10:19.665 Submission Queue Entry Size 00:10:19.665 Max: 64 00:10:19.665 Min: 64 00:10:19.665 Completion Queue Entry Size 00:10:19.665 Max: 16 00:10:19.665 Min: 16 00:10:19.665 Number of Namespaces: 256 00:10:19.665 Compare Command: Supported 00:10:19.665 Write Uncorrectable Command: Not Supported 00:10:19.665 Dataset Management Command: Supported 00:10:19.665 Write Zeroes Command: Supported 00:10:19.665 Set Features Save Field: Supported 00:10:19.665 Reservations: Not Supported 00:10:19.665 Timestamp: Supported 00:10:19.665 Copy: Supported 00:10:19.665 Volatile Write Cache: Present 00:10:19.665 Atomic Write Unit (Normal): 1 00:10:19.665 Atomic Write Unit (PFail): 1 00:10:19.665 Atomic Compare & Write Unit: 1 00:10:19.665 Fused Compare & Write: Not Supported 00:10:19.665 Scatter-Gather List 00:10:19.665 SGL Command Set: Supported 00:10:19.665 SGL Keyed: Not Supported 00:10:19.665 SGL Bit Bucket Descriptor: Not Supported 00:10:19.665 SGL Metadata Pointer: Not Supported 00:10:19.665 Oversized SGL: Not Supported 00:10:19.665 SGL Metadata Address: Not Supported 00:10:19.665 SGL Offset: Not Supported 00:10:19.665 Transport SGL Data Block: Not Supported 00:10:19.665 Replay Protected Memory Block: Not Supported 00:10:19.665 00:10:19.665 Firmware Slot Information 00:10:19.665 ========================= 00:10:19.665 Active slot: 1 00:10:19.665 Slot 1 Firmware Revision: 1.0 00:10:19.665 00:10:19.665 00:10:19.665 Commands Supported and Effects 00:10:19.665
============================== 00:10:19.665 Admin Commands 00:10:19.665 -------------- 00:10:19.665 Delete I/O Submission Queue (00h): Supported 00:10:19.665 Create I/O Submission Queue (01h): Supported 00:10:19.665 Get Log Page (02h): Supported 00:10:19.665 Delete I/O Completion Queue (04h): Supported 00:10:19.665 Create I/O Completion Queue (05h): Supported 00:10:19.665 Identify (06h): Supported 00:10:19.665 Abort (08h): Supported 00:10:19.665 Set Features (09h): Supported 00:10:19.665 Get Features (0Ah): Supported 00:10:19.665 Asynchronous Event Request (0Ch): Supported 00:10:19.665 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.665 Directive Send (19h): Supported 00:10:19.665 Directive Receive (1Ah): Supported 00:10:19.665 Virtualization Management (1Ch): Supported 00:10:19.665 Doorbell Buffer Config (7Ch): Supported 00:10:19.665 Format NVM (80h): Supported LBA-Change 00:10:19.665 I/O Commands 00:10:19.665 ------------ 00:10:19.665 Flush (00h): Supported LBA-Change 00:10:19.665 Write (01h): Supported LBA-Change 00:10:19.665 Read (02h): Supported 00:10:19.665 Compare (05h): Supported 00:10:19.665 Write Zeroes (08h): Supported LBA-Change 00:10:19.665 Dataset Management (09h): Supported LBA-Change 00:10:19.665 Unknown (0Ch): Supported 00:10:19.665 Unknown (12h): Supported 00:10:19.665 Copy (19h): Supported LBA-Change 00:10:19.665 Unknown (1Dh): Supported LBA-Change 00:10:19.665 00:10:19.665 Error Log 00:10:19.665 ========= 00:10:19.665 00:10:19.665 Arbitration 00:10:19.665 =========== 00:10:19.665 Arbitration Burst: no limit 00:10:19.665 00:10:19.665 Power Management 00:10:19.665 ================ 00:10:19.665 Number of Power States: 1 00:10:19.665 Current Power State: Power State #0 00:10:19.665 Power State #0: 00:10:19.665 Max Power: 25.00 W 00:10:19.665 Non-Operational State: Operational 00:10:19.665 Entry Latency: 16 microseconds 00:10:19.665 Exit Latency: 4 microseconds 00:10:19.665 Relative Read Throughput: 0 00:10:19.665 Relative Read Latency: 0 00:10:19.665 Relative Write Throughput: 0 00:10:19.665 Relative Write Latency: 0 00:10:19.925 Idle Power: Not Reported 00:10:19.925 Active Power: Not Reported 00:10:19.925 Non-Operational Permissive Mode: Not Supported 00:10:19.925 00:10:19.925 Health Information 00:10:19.925 ================== 00:10:19.925 Critical Warnings: 00:10:19.925 Available Spare Space: OK 00:10:19.925 Temperature: OK 00:10:19.925 Device Reliability: OK 00:10:19.925 Read Only: No 00:10:19.925 Volatile Memory Backup: OK 00:10:19.925 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.925 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.925 Available Spare: 0% 00:10:19.925 Available Spare Threshold: 0% 00:10:19.925 Life Percentage Used: 0% 00:10:19.925 Data Units Read: 1289 00:10:19.925 Data Units Written: 595 00:10:19.925 Host Read Commands: 59879 00:10:19.925 Host Write Commands: 29399 00:10:19.925 Controller Busy Time: 0 minutes 00:10:19.925 Power Cycles: 0 00:10:19.925 Power On Hours: 0 hours 00:10:19.925 Unsafe Shutdowns: 0 00:10:19.925 Unrecoverable Media Errors: 0 00:10:19.925 Lifetime Error Log Entries: 0 00:10:19.925 Warning Temperature Time: 0 minutes 00:10:19.925 Critical Temperature Time: 0 minutes 00:10:19.925 00:10:19.925 Number of Queues 00:10:19.925 ================ 00:10:19.925 Number of I/O Submission Queues: 64 00:10:19.925 Number of I/O Completion Queues: 64 00:10:19.925 00:10:19.925 ZNS Specific Controller Data 00:10:19.925 ============================ 00:10:19.925 Zone Append Size Limit: 0 00:10:19.925 00:10:19.925 
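Each Health Information section in these dumps prints the composite temperature in Kelvin with the Celsius value in parentheses. The conversion applied here is a plain integer offset, C = K - 273 (the 0.15 is dropped in these dumps), which a throwaway one-liner can confirm against the values above:

  for k in 323 343; do printf '%s Kelvin = %s Celsius\n' "$k" "$((k - 273))"; done
  # 323 Kelvin = 50 Celsius (current temperature), 343 Kelvin = 70 Celsius (threshold)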
00:10:19.925 Active Namespaces 00:10:19.925 ================= 00:10:19.925 Namespace ID:1 00:10:19.925 Error Recovery Timeout: Unlimited 00:10:19.925 Command Set Identifier: NVM (00h) 00:10:19.925 Deallocate: Supported 00:10:19.925 Deallocated/Unwritten Error: Supported 00:10:19.925 Deallocated Read Value: All 0x00 00:10:19.925 Deallocate in Write Zeroes: Not Supported 00:10:19.925 Deallocated Guard Field: 0xFFFF 00:10:19.925 Flush: Supported 00:10:19.925 Reservation: Not Supported 00:10:19.925 Namespace Sharing Capabilities: Private 00:10:19.925 Size (in LBAs): 1310720 (5GiB) 00:10:19.925 Capacity (in LBAs): 1310720 (5GiB) 00:10:19.925 Utilization (in LBAs): 1310720 (5GiB) 00:10:19.925 Thin Provisioning: Not Supported 00:10:19.925 Per-NS Atomic Units: No 00:10:19.925 Maximum Single Source Range Length: 128 00:10:19.925 Maximum Copy Length: 128 00:10:19.925 Maximum Source Range Count: 128 00:10:19.925 NGUID/EUI64 Never Reused: No 00:10:19.925 Namespace Write Protected: No 00:10:19.925 Number of LBA Formats: 8 00:10:19.925 Current LBA Format: LBA Format #04 00:10:19.925 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.925 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.925 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.925 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.925 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.925 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.925 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.925 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.925 00:10:19.925 20:56:40 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:19.925 20:56:40 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:10:20.185 ===================================================== 00:10:20.185 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:20.185 ===================================================== 00:10:20.185 Controller Capabilities/Features 00:10:20.185 ================================ 00:10:20.185 Vendor ID: 1b36 00:10:20.185 Subsystem Vendor ID: 1af4 00:10:20.185 Serial Number: 12342 00:10:20.185 Model Number: QEMU NVMe Ctrl 00:10:20.185 Firmware Version: 8.0.0 00:10:20.185 Recommended Arb Burst: 6 00:10:20.185 IEEE OUI Identifier: 00 54 52 00:10:20.185 Multi-path I/O 00:10:20.185 May have multiple subsystem ports: No 00:10:20.185 May have multiple controllers: No 00:10:20.185 Associated with SR-IOV VF: No 00:10:20.185 Max Data Transfer Size: 524288 00:10:20.185 Max Number of Namespaces: 256 00:10:20.185 Max Number of I/O Queues: 64 00:10:20.185 NVMe Specification Version (VS): 1.4 00:10:20.185 NVMe Specification Version (Identify): 1.4 00:10:20.185 Maximum Queue Entries: 2048 00:10:20.185 Contiguous Queues Required: Yes 00:10:20.185 Arbitration Mechanisms Supported 00:10:20.185 Weighted Round Robin: Not Supported 00:10:20.185 Vendor Specific: Not Supported 00:10:20.185 Reset Timeout: 7500 ms 00:10:20.185 Doorbell Stride: 4 bytes 00:10:20.185 NVM Subsystem Reset: Not Supported 00:10:20.185 Command Sets Supported 00:10:20.185 NVM Command Set: Supported 00:10:20.185 Boot Partition: Not Supported 00:10:20.185 Memory Page Size Minimum: 4096 bytes 00:10:20.185 Memory Page Size Maximum: 65536 bytes 00:10:20.185 Persistent Memory Region: Not Supported 00:10:20.185 Optional Asynchronous Events Supported 00:10:20.185 Namespace Attribute Notices: Supported 00:10:20.185 Firmware Activation Notices: Not Supported 00:10:20.185 ANA Change 
Notices: Not Supported 00:10:20.185 PLE Aggregate Log Change Notices: Not Supported 00:10:20.185 LBA Status Info Alert Notices: Not Supported 00:10:20.185 EGE Aggregate Log Change Notices: Not Supported 00:10:20.185 Normal NVM Subsystem Shutdown event: Not Supported 00:10:20.185 Zone Descriptor Change Notices: Not Supported 00:10:20.185 Discovery Log Change Notices: Not Supported 00:10:20.185 Controller Attributes 00:10:20.185 128-bit Host Identifier: Not Supported 00:10:20.185 Non-Operational Permissive Mode: Not Supported 00:10:20.185 NVM Sets: Not Supported 00:10:20.185 Read Recovery Levels: Not Supported 00:10:20.185 Endurance Groups: Not Supported 00:10:20.185 Predictable Latency Mode: Not Supported 00:10:20.185 Traffic Based Keep Alive: Not Supported 00:10:20.185 Namespace Granularity: Not Supported 00:10:20.185 SQ Associations: Not Supported 00:10:20.185 UUID List: Not Supported 00:10:20.185 Multi-Domain Subsystem: Not Supported 00:10:20.185 Fixed Capacity Management: Not Supported 00:10:20.185 Variable Capacity Management: Not Supported 00:10:20.185 Delete Endurance Group: Not Supported 00:10:20.185 Delete NVM Set: Not Supported 00:10:20.185 Extended LBA Formats Supported: Supported 00:10:20.185 Flexible Data Placement Supported: Not Supported 00:10:20.185 00:10:20.185 Controller Memory Buffer Support 00:10:20.185 ================================ 00:10:20.185 Supported: No 00:10:20.185 00:10:20.185 Persistent Memory Region Support 00:10:20.185 ================================ 00:10:20.185 Supported: No 00:10:20.185 00:10:20.185 Admin Command Set Attributes 00:10:20.185 ============================ 00:10:20.185 Security Send/Receive: Not Supported 00:10:20.185 Format NVM: Supported 00:10:20.185 Firmware Activate/Download: Not Supported 00:10:20.185 Namespace Management: Supported 00:10:20.185 Device Self-Test: Not Supported 00:10:20.185 Directives: Supported 00:10:20.185 NVMe-MI: Not Supported 00:10:20.185 Virtualization Management: Not Supported 00:10:20.185 Doorbell Buffer Config: Supported 00:10:20.185 Get LBA Status Capability: Not Supported 00:10:20.185 Command & Feature Lockdown Capability: Not Supported 00:10:20.186 Abort Command Limit: 4 00:10:20.186 Async Event Request Limit: 4 00:10:20.186 Number of Firmware Slots: N/A 00:10:20.186 Firmware Slot 1 Read-Only: N/A 00:10:20.186 Firmware Activation Without Reset: N/A 00:10:20.186 Multiple Update Detection Support: N/A 00:10:20.186 Firmware Update Granularity: No Information Provided 00:10:20.186 Per-Namespace SMART Log: Yes 00:10:20.186 Asymmetric Namespace Access Log Page: Not Supported 00:10:20.186 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:20.186 Command Effects Log Page: Supported 00:10:20.186 Get Log Page Extended Data: Supported 00:10:20.186 Telemetry Log Pages: Not Supported 00:10:20.186 Persistent Event Log Pages: Not Supported 00:10:20.186 Supported Log Pages Log Page: May Support 00:10:20.186 Commands Supported & Effects Log Page: Not Supported 00:10:20.186 Feature Identifiers & Effects Log Page: May Support 00:10:20.186 NVMe-MI Commands & Effects Log Page: May Support 00:10:20.186 Data Area 4 for Telemetry Log: Not Supported 00:10:20.186 Error Log Page Entries Supported: 1 00:10:20.186 Keep Alive: Not Supported 00:10:20.186 00:10:20.186 NVM Command Set Attributes 00:10:20.186 ========================== 00:10:20.186 Submission Queue Entry Size 00:10:20.186 Max: 64 00:10:20.186 Min: 64 00:10:20.186 Completion Queue Entry Size 00:10:20.186 Max: 16 00:10:20.186 Min: 16 00:10:20.186 Number of Namespaces: 256
00:10:20.186 Compare Command: Supported 00:10:20.186 Write Uncorrectable Command: Not Supported 00:10:20.186 Dataset Management Command: Supported 00:10:20.186 Write Zeroes Command: Supported 00:10:20.186 Set Features Save Field: Supported 00:10:20.186 Reservations: Not Supported 00:10:20.186 Timestamp: Supported 00:10:20.186 Copy: Supported 00:10:20.186 Volatile Write Cache: Present 00:10:20.186 Atomic Write Unit (Normal): 1 00:10:20.186 Atomic Write Unit (PFail): 1 00:10:20.186 Atomic Compare & Write Unit: 1 00:10:20.186 Fused Compare & Write: Not Supported 00:10:20.186 Scatter-Gather List 00:10:20.186 SGL Command Set: Supported 00:10:20.186 SGL Keyed: Not Supported 00:10:20.186 SGL Bit Bucket Descriptor: Not Supported 00:10:20.186 SGL Metadata Pointer: Not Supported 00:10:20.186 Oversized SGL: Not Supported 00:10:20.186 SGL Metadata Address: Not Supported 00:10:20.186 SGL Offset: Not Supported 00:10:20.186 Transport SGL Data Block: Not Supported 00:10:20.186 Replay Protected Memory Block: Not Supported 00:10:20.186 00:10:20.186 Firmware Slot Information 00:10:20.186 ========================= 00:10:20.186 Active slot: 1 00:10:20.186 Slot 1 Firmware Revision: 1.0 00:10:20.186 00:10:20.186 00:10:20.186 Commands Supported and Effects 00:10:20.186 ============================== 00:10:20.186 Admin Commands 00:10:20.186 -------------- 00:10:20.186 Delete I/O Submission Queue (00h): Supported 00:10:20.186 Create I/O Submission Queue (01h): Supported 00:10:20.186 Get Log Page (02h): Supported 00:10:20.186 Delete I/O Completion Queue (04h): Supported 00:10:20.186 Create I/O Completion Queue (05h): Supported 00:10:20.186 Identify (06h): Supported 00:10:20.186 Abort (08h): Supported 00:10:20.186 Set Features (09h): Supported 00:10:20.186 Get Features (0Ah): Supported 00:10:20.186 Asynchronous Event Request (0Ch): Supported 00:10:20.186 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:20.186 Directive Send (19h): Supported 00:10:20.186 Directive Receive (1Ah): Supported 00:10:20.186 Virtualization Management (1Ch): Supported 00:10:20.186 Doorbell Buffer Config (7Ch): Supported 00:10:20.186 Format NVM (80h): Supported LBA-Change 00:10:20.186 I/O Commands 00:10:20.186 ------------ 00:10:20.186 Flush (00h): Supported LBA-Change 00:10:20.186 Write (01h): Supported LBA-Change 00:10:20.186 Read (02h): Supported 00:10:20.186 Compare (05h): Supported 00:10:20.186 Write Zeroes (08h): Supported LBA-Change 00:10:20.186 Dataset Management (09h): Supported LBA-Change 00:10:20.186 Unknown (0Ch): Supported 00:10:20.186 Unknown (12h): Supported 00:10:20.186 Copy (19h): Supported LBA-Change 00:10:20.186 Unknown (1Dh): Supported LBA-Change 00:10:20.186 00:10:20.186 Error Log 00:10:20.186 ========= 00:10:20.186 00:10:20.186 Arbitration 00:10:20.186 =========== 00:10:20.186 Arbitration Burst: no limit 00:10:20.186 00:10:20.186 Power Management 00:10:20.186 ================ 00:10:20.186 Number of Power States: 1 00:10:20.186 Current Power State: Power State #0 00:10:20.186 Power State #0: 00:10:20.186 Max Power: 25.00 W 00:10:20.186 Non-Operational State: Operational 00:10:20.186 Entry Latency: 16 microseconds 00:10:20.186 Exit Latency: 4 microseconds 00:10:20.186 Relative Read Throughput: 0 00:10:20.186 Relative Read Latency: 0 00:10:20.186 Relative Write Throughput: 0 00:10:20.186 Relative Write Latency: 0 00:10:20.186 Idle Power: Not Reported 00:10:20.186 Active Power: Not Reported 00:10:20.186 Non-Operational Permissive Mode: Not Supported 00:10:20.186 00:10:20.186 Health Information 00:10:20.186 
================== 00:10:20.186 Critical Warnings: 00:10:20.186 Available Spare Space: OK 00:10:20.186 Temperature: OK 00:10:20.186 Device Reliability: OK 00:10:20.186 Read Only: No 00:10:20.186 Volatile Memory Backup: OK 00:10:20.186 Current Temperature: 323 Kelvin (50 Celsius) 00:10:20.186 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:20.186 Available Spare: 0% 00:10:20.186 Available Spare Threshold: 0% 00:10:20.186 Life Percentage Used: 0% 00:10:20.186 Data Units Read: 3948 00:10:20.186 Data Units Written: 1816 00:10:20.186 Host Read Commands: 180806 00:10:20.186 Host Write Commands: 88578 00:10:20.186 Controller Busy Time: 0 minutes 00:10:20.186 Power Cycles: 0 00:10:20.187 Power On Hours: 0 hours 00:10:20.187 Unsafe Shutdowns: 0 00:10:20.187 Unrecoverable Media Errors: 0 00:10:20.187 Lifetime Error Log Entries: 0 00:10:20.187 Warning Temperature Time: 0 minutes 00:10:20.187 Critical Temperature Time: 0 minutes 00:10:20.187 00:10:20.187 Number of Queues 00:10:20.187 ================ 00:10:20.187 Number of I/O Submission Queues: 64 00:10:20.187 Number of I/O Completion Queues: 64 00:10:20.187 00:10:20.187 ZNS Specific Controller Data 00:10:20.187 ============================ 00:10:20.187 Zone Append Size Limit: 0 00:10:20.187 00:10:20.187 00:10:20.187 Active Namespaces 00:10:20.187 ================= 00:10:20.187 Namespace ID:1 00:10:20.187 Error Recovery Timeout: Unlimited 00:10:20.187 Command Set Identifier: NVM (00h) 00:10:20.187 Deallocate: Supported 00:10:20.187 Deallocated/Unwritten Error: Supported 00:10:20.187 Deallocated Read Value: All 0x00 00:10:20.187 Deallocate in Write Zeroes: Not Supported 00:10:20.187 Deallocated Guard Field: 0xFFFF 00:10:20.187 Flush: Supported 00:10:20.187 Reservation: Not Supported 00:10:20.187 Namespace Sharing Capabilities: Private 00:10:20.187 Size (in LBAs): 1048576 (4GiB) 00:10:20.187 Capacity (in LBAs): 1048576 (4GiB) 00:10:20.187 Utilization (in LBAs): 1048576 (4GiB) 00:10:20.187 Thin Provisioning: Not Supported 00:10:20.187 Per-NS Atomic Units: No 00:10:20.187 Maximum Single Source Range Length: 128 00:10:20.187 Maximum Copy Length: 128 00:10:20.187 Maximum Source Range Count: 128 00:10:20.187 NGUID/EUI64 Never Reused: No 00:10:20.187 Namespace Write Protected: No 00:10:20.187 Number of LBA Formats: 8 00:10:20.187 Current LBA Format: LBA Format #04 00:10:20.187 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.187 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.187 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.187 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:20.187 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.187 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.187 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:20.187 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.187 00:10:20.187 Namespace ID:2 00:10:20.187 Error Recovery Timeout: Unlimited 00:10:20.187 Command Set Identifier: NVM (00h) 00:10:20.187 Deallocate: Supported 00:10:20.187 Deallocated/Unwritten Error: Supported 00:10:20.187 Deallocated Read Value: All 0x00 00:10:20.187 Deallocate in Write Zeroes: Not Supported 00:10:20.187 Deallocated Guard Field: 0xFFFF 00:10:20.187 Flush: Supported 00:10:20.187 Reservation: Not Supported 00:10:20.187 Namespace Sharing Capabilities: Private 00:10:20.187 Size (in LBAs): 1048576 (4GiB) 00:10:20.187 Capacity (in LBAs): 1048576 (4GiB) 00:10:20.187 Utilization (in LBAs): 1048576 (4GiB) 00:10:20.187 Thin Provisioning: Not Supported 00:10:20.187 Per-NS Atomic Units: No 
00:10:20.187 Maximum Single Source Range Length: 128 00:10:20.187 Maximum Copy Length: 128 00:10:20.187 Maximum Source Range Count: 128 00:10:20.187 NGUID/EUI64 Never Reused: No 00:10:20.187 Namespace Write Protected: No 00:10:20.187 Number of LBA Formats: 8 00:10:20.187 Current LBA Format: LBA Format #04 00:10:20.187 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.187 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.187 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.187 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:20.187 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.187 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.187 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:20.187 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.187 00:10:20.187 Namespace ID:3 00:10:20.187 Error Recovery Timeout: Unlimited 00:10:20.187 Command Set Identifier: NVM (00h) 00:10:20.187 Deallocate: Supported 00:10:20.187 Deallocated/Unwritten Error: Supported 00:10:20.187 Deallocated Read Value: All 0x00 00:10:20.187 Deallocate in Write Zeroes: Not Supported 00:10:20.187 Deallocated Guard Field: 0xFFFF 00:10:20.187 Flush: Supported 00:10:20.187 Reservation: Not Supported 00:10:20.187 Namespace Sharing Capabilities: Private 00:10:20.187 Size (in LBAs): 1048576 (4GiB) 00:10:20.187 Capacity (in LBAs): 1048576 (4GiB) 00:10:20.187 Utilization (in LBAs): 1048576 (4GiB) 00:10:20.187 Thin Provisioning: Not Supported 00:10:20.187 Per-NS Atomic Units: No 00:10:20.187 Maximum Single Source Range Length: 128 00:10:20.187 Maximum Copy Length: 128 00:10:20.187 Maximum Source Range Count: 128 00:10:20.187 NGUID/EUI64 Never Reused: No 00:10:20.187 Namespace Write Protected: No 00:10:20.187 Number of LBA Formats: 8 00:10:20.187 Current LBA Format: LBA Format #04 00:10:20.187 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.187 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.187 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.187 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:20.187 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.187 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.187 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:20.187 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.187 00:10:20.187 20:56:41 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:20.187 20:56:41 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:10:20.447 ===================================================== 00:10:20.447 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:20.447 ===================================================== 00:10:20.447 Controller Capabilities/Features 00:10:20.447 ================================ 00:10:20.447 Vendor ID: 1b36 00:10:20.447 Subsystem Vendor ID: 1af4 00:10:20.447 Serial Number: 12343 00:10:20.447 Model Number: QEMU NVMe Ctrl 00:10:20.447 Firmware Version: 8.0.0 00:10:20.447 Recommended Arb Burst: 6 00:10:20.447 IEEE OUI Identifier: 00 54 52 00:10:20.447 Multi-path I/O 00:10:20.447 May have multiple subsystem ports: No 00:10:20.447 May have multiple controllers: Yes 00:10:20.447 Associated with SR-IOV VF: No 00:10:20.447 Max Data Transfer Size: 524288 00:10:20.448 Max Number of Namespaces: 256 00:10:20.448 Max Number of I/O Queues: 64 00:10:20.448 NVMe Specification Version (VS): 1.4 00:10:20.448 NVMe Specification Version (Identify): 1.4 00:10:20.448 Maximum Queue Entries: 2048 
00:10:20.448 Contiguous Queues Required: Yes 00:10:20.448 Arbitration Mechanisms Supported 00:10:20.448 Weighted Round Robin: Not Supported 00:10:20.448 Vendor Specific: Not Supported 00:10:20.448 Reset Timeout: 7500 ms 00:10:20.448 Doorbell Stride: 4 bytes 00:10:20.448 NVM Subsystem Reset: Not Supported 00:10:20.448 Command Sets Supported 00:10:20.448 NVM Command Set: Supported 00:10:20.448 Boot Partition: Not Supported 00:10:20.448 Memory Page Size Minimum: 4096 bytes 00:10:20.448 Memory Page Size Maximum: 65536 bytes 00:10:20.448 Persistent Memory Region: Not Supported 00:10:20.448 Optional Asynchronous Events Supported 00:10:20.448 Namespace Attribute Notices: Supported 00:10:20.448 Firmware Activation Notices: Not Supported 00:10:20.448 ANA Change Notices: Not Supported 00:10:20.448 PLE Aggregate Log Change Notices: Not Supported 00:10:20.448 LBA Status Info Alert Notices: Not Supported 00:10:20.448 EGE Aggregate Log Change Notices: Not Supported 00:10:20.448 Normal NVM Subsystem Shutdown event: Not Supported 00:10:20.448 Zone Descriptor Change Notices: Not Supported 00:10:20.448 Discovery Log Change Notices: Not Supported 00:10:20.448 Controller Attributes 00:10:20.448 128-bit Host Identifier: Not Supported 00:10:20.448 Non-Operational Permissive Mode: Not Supported 00:10:20.448 NVM Sets: Not Supported 00:10:20.448 Read Recovery Levels: Not Supported 00:10:20.448 Endurance Groups: Supported 00:10:20.448 Predictable Latency Mode: Not Supported 00:10:20.448 Traffic Based Keep Alive: Not Supported 00:10:20.448 Namespace Granularity: Not Supported 00:10:20.448 SQ Associations: Not Supported 00:10:20.448 UUID List: Not Supported 00:10:20.448 Multi-Domain Subsystem: Not Supported 00:10:20.448 Fixed Capacity Management: Not Supported 00:10:20.448 Variable Capacity Management: Not Supported 00:10:20.448 Delete Endurance Group: Not Supported 00:10:20.448 Delete NVM Set: Not Supported 00:10:20.448 Extended LBA Formats Supported: Supported 00:10:20.448 Flexible Data Placement Supported: Supported 00:10:20.448 00:10:20.448 Controller Memory Buffer Support 00:10:20.448 ================================ 00:10:20.448 Supported: No 00:10:20.448 00:10:20.448 Persistent Memory Region Support 00:10:20.448 ================================ 00:10:20.448 Supported: No 00:10:20.448 00:10:20.448 Admin Command Set Attributes 00:10:20.448 ============================ 00:10:20.448 Security Send/Receive: Not Supported 00:10:20.448 Format NVM: Supported 00:10:20.448 Firmware Activate/Download: Not Supported 00:10:20.448 Namespace Management: Supported 00:10:20.448 Device Self-Test: Not Supported 00:10:20.448 Directives: Supported 00:10:20.448 NVMe-MI: Not Supported 00:10:20.448 Virtualization Management: Not Supported 00:10:20.448 Doorbell Buffer Config: Supported 00:10:20.448 Get LBA Status Capability: Not Supported 00:10:20.448 Command & Feature Lockdown Capability: Not Supported 00:10:20.448 Abort Command Limit: 4 00:10:20.448 Async Event Request Limit: 4 00:10:20.448 Number of Firmware Slots: N/A 00:10:20.448 Firmware Slot 1 Read-Only: N/A 00:10:20.448 Firmware Activation Without Reset: N/A 00:10:20.448 Multiple Update Detection Support: N/A 00:10:20.448 Firmware Update Granularity: No Information Provided 00:10:20.448 Per-Namespace SMART Log: Yes 00:10:20.448 Asymmetric Namespace Access Log Page: Not Supported 00:10:20.448 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:20.448 Command Effects Log Page: Supported 00:10:20.448 Get Log Page Extended Data: Supported 00:10:20.448 Telemetry Log Pages: Not
Supported 00:10:20.448 Persistent Event Log Pages: Not Supported 00:10:20.448 Supported Log Pages Log Page: May Support 00:10:20.448 Commands Supported & Effects Log Page: Not Supported 00:10:20.448 Feature Identifiers & Effects Log Page: May Support 00:10:20.448 NVMe-MI Commands & Effects Log Page: May Support 00:10:20.448 Data Area 4 for Telemetry Log: Not Supported 00:10:20.448 Error Log Page Entries Supported: 1 00:10:20.448 Keep Alive: Not Supported 00:10:20.448 00:10:20.448 NVM Command Set Attributes 00:10:20.448 ========================== 00:10:20.448 Submission Queue Entry Size 00:10:20.448 Max: 64 00:10:20.448 Min: 64 00:10:20.448 Completion Queue Entry Size 00:10:20.448 Max: 16 00:10:20.448 Min: 16 00:10:20.448 Number of Namespaces: 256 00:10:20.448 Compare Command: Supported 00:10:20.448 Write Uncorrectable Command: Not Supported 00:10:20.448 Dataset Management Command: Supported 00:10:20.448 Write Zeroes Command: Supported 00:10:20.448 Set Features Save Field: Supported 00:10:20.448 Reservations: Not Supported 00:10:20.448 Timestamp: Supported 00:10:20.448 Copy: Supported 00:10:20.448 Volatile Write Cache: Present 00:10:20.448 Atomic Write Unit (Normal): 1 00:10:20.448 Atomic Write Unit (PFail): 1 00:10:20.448 Atomic Compare & Write Unit: 1 00:10:20.448 Fused Compare & Write: Not Supported 00:10:20.448 Scatter-Gather List 00:10:20.448 SGL Command Set: Supported 00:10:20.448 SGL Keyed: Not Supported 00:10:20.448 SGL Bit Bucket Descriptor: Not Supported 00:10:20.448 SGL Metadata Pointer: Not Supported 00:10:20.448 Oversized SGL: Not Supported 00:10:20.448 SGL Metadata Address: Not Supported 00:10:20.448 SGL Offset: Not Supported 00:10:20.448 Transport SGL Data Block: Not Supported 00:10:20.448 Replay Protected Memory Block: Not Supported 00:10:20.448 00:10:20.448 Firmware Slot Information 00:10:20.448 ========================= 00:10:20.448 Active slot: 1 00:10:20.448 Slot 1 Firmware Revision: 1.0 00:10:20.448 00:10:20.448 00:10:20.448 Commands Supported and Effects 00:10:20.448 ============================== 00:10:20.448 Admin Commands 00:10:20.448 -------------- 00:10:20.448 Delete I/O Submission Queue (00h): Supported 00:10:20.448 Create I/O Submission Queue (01h): Supported 00:10:20.448 Get Log Page (02h): Supported 00:10:20.448 Delete I/O Completion Queue (04h): Supported 00:10:20.448 Create I/O Completion Queue (05h): Supported 00:10:20.448 Identify (06h): Supported 00:10:20.448 Abort (08h): Supported 00:10:20.448 Set Features (09h): Supported 00:10:20.448 Get Features (0Ah): Supported 00:10:20.448 Asynchronous Event Request (0Ch): Supported 00:10:20.448 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:20.448 Directive Send (19h): Supported 00:10:20.448 Directive Receive (1Ah): Supported 00:10:20.448 Virtualization Management (1Ch): Supported 00:10:20.448 Doorbell Buffer Config (7Ch): Supported 00:10:20.448 Format NVM (80h): Supported LBA-Change 00:10:20.448 I/O Commands 00:10:20.448 ------------ 00:10:20.448 Flush (00h): Supported LBA-Change 00:10:20.448 Write (01h): Supported LBA-Change 00:10:20.448 Read (02h): Supported 00:10:20.448 Compare (05h): Supported 00:10:20.448 Write Zeroes (08h): Supported LBA-Change 00:10:20.448 Dataset Management (09h): Supported LBA-Change 00:10:20.448 Unknown (0Ch): Supported 00:10:20.448 Unknown (12h): Supported 00:10:20.448 Copy (19h): Supported LBA-Change 00:10:20.448 Unknown (1Dh): Supported LBA-Change 00:10:20.448 00:10:20.448 Error Log 00:10:20.448 ========= 00:10:20.448 00:10:20.448 Arbitration 00:10:20.448 ===========
00:10:20.448 Arbitration Burst: no limit 00:10:20.448 00:10:20.448 Power Management 00:10:20.448 ================ 00:10:20.448 Number of Power States: 1 00:10:20.448 Current Power State: Power State #0 00:10:20.448 Power State #0: 00:10:20.448 Max Power: 25.00 W 00:10:20.448 Non-Operational State: Operational 00:10:20.448 Entry Latency: 16 microseconds 00:10:20.448 Exit Latency: 4 microseconds 00:10:20.448 Relative Read Throughput: 0 00:10:20.448 Relative Read Latency: 0 00:10:20.448 Relative Write Throughput: 0 00:10:20.448 Relative Write Latency: 0 00:10:20.448 Idle Power: Not Reported 00:10:20.448 Active Power: Not Reported 00:10:20.448 Non-Operational Permissive Mode: Not Supported 00:10:20.448 00:10:20.448 Health Information 00:10:20.448 ================== 00:10:20.448 Critical Warnings: 00:10:20.448 Available Spare Space: OK 00:10:20.448 Temperature: OK 00:10:20.448 Device Reliability: OK 00:10:20.448 Read Only: No 00:10:20.448 Volatile Memory Backup: OK 00:10:20.448 Current Temperature: 323 Kelvin (50 Celsius) 00:10:20.448 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:20.448 Available Spare: 0% 00:10:20.448 Available Spare Threshold: 0% 00:10:20.448 Life Percentage Used: 0% 00:10:20.449 Data Units Read: 1354 00:10:20.449 Data Units Written: 626 00:10:20.449 Host Read Commands: 60530 00:10:20.449 Host Write Commands: 29707 00:10:20.449 Controller Busy Time: 0 minutes 00:10:20.449 Power Cycles: 0 00:10:20.449 Power On Hours: 0 hours 00:10:20.449 Unsafe Shutdowns: 0 00:10:20.449 Unrecoverable Media Errors: 0 00:10:20.449 Lifetime Error Log Entries: 0 00:10:20.449 Warning Temperature Time: 0 minutes 00:10:20.449 Critical Temperature Time: 0 minutes 00:10:20.449 00:10:20.449 Number of Queues 00:10:20.449 ================ 00:10:20.449 Number of I/O Submission Queues: 64 00:10:20.449 Number of I/O Completion Queues: 64 00:10:20.449 00:10:20.449 ZNS Specific Controller Data 00:10:20.449 ============================ 00:10:20.449 Zone Append Size Limit: 0 00:10:20.449 00:10:20.449 00:10:20.449 Active Namespaces 00:10:20.449 ================= 00:10:20.449 Namespace ID:1 00:10:20.449 Error Recovery Timeout: Unlimited 00:10:20.449 Command Set Identifier: NVM (00h) 00:10:20.449 Deallocate: Supported 00:10:20.449 Deallocated/Unwritten Error: Supported 00:10:20.449 Deallocated Read Value: All 0x00 00:10:20.449 Deallocate in Write Zeroes: Not Supported 00:10:20.449 Deallocated Guard Field: 0xFFFF 00:10:20.449 Flush: Supported 00:10:20.449 Reservation: Not Supported 00:10:20.449 Namespace Sharing Capabilities: Multiple Controllers 00:10:20.449 Size (in LBAs): 262144 (1GiB) 00:10:20.449 Capacity (in LBAs): 262144 (1GiB) 00:10:20.449 Utilization (in LBAs): 262144 (1GiB) 00:10:20.449 Thin Provisioning: Not Supported 00:10:20.449 Per-NS Atomic Units: No 00:10:20.449 Maximum Single Source Range Length: 128 00:10:20.449 Maximum Copy Length: 128 00:10:20.449 Maximum Source Range Count: 128 00:10:20.449 NGUID/EUI64 Never Reused: No 00:10:20.449 Namespace Write Protected: No 00:10:20.449 Endurance group ID: 1 00:10:20.449 Number of LBA Formats: 8 00:10:20.449 Current LBA Format: LBA Format #04 00:10:20.449 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.449 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.449 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.449 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:20.449 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.449 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.449 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:10:20.449 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.449 00:10:20.449 Get Feature FDP: 00:10:20.449 ================ 00:10:20.449 Enabled: Yes 00:10:20.449 FDP configuration index: 0 00:10:20.449 00:10:20.449 FDP configurations log page 00:10:20.449 =========================== 00:10:20.449 Number of FDP configurations: 1 00:10:20.449 Version: 0 00:10:20.449 Size: 112 00:10:20.449 FDP Configuration Descriptor: 0 00:10:20.449 Descriptor Size: 96 00:10:20.449 Reclaim Group Identifier format: 2 00:10:20.449 FDP Volatile Write Cache: Not Present 00:10:20.449 FDP Configuration: Valid 00:10:20.449 Vendor Specific Size: 0 00:10:20.449 Number of Reclaim Groups: 2 00:10:20.449 Number of Reclaim Unit Handles: 8 00:10:20.449 Max Placement Identifiers: 128 00:10:20.449 Number of Namespaces Supported: 256 00:10:20.449 Reclaim Unit Nominal Size: 6000000 bytes 00:10:20.449 Estimated Reclaim Unit Time Limit: Not Reported 00:10:20.449 RUH Desc #000: RUH Type: Initially Isolated 00:10:20.449 RUH Desc #001: RUH Type: Initially Isolated 00:10:20.449 RUH Desc #002: RUH Type: Initially Isolated 00:10:20.449 RUH Desc #003: RUH Type: Initially Isolated 00:10:20.449 RUH Desc #004: RUH Type: Initially Isolated 00:10:20.449 RUH Desc #005: RUH Type: Initially Isolated 00:10:20.449 RUH Desc #006: RUH Type: Initially Isolated 00:10:20.449 RUH Desc #007: RUH Type: Initially Isolated 00:10:20.449 00:10:20.449 FDP reclaim unit handle usage log page 00:10:20.449 ====================================== 00:10:20.449 Number of Reclaim Unit Handles: 8 00:10:20.449 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:20.449 RUH Usage Desc #001: RUH Attributes: Unused 00:10:20.449 RUH Usage Desc #002: RUH Attributes: Unused 00:10:20.449 RUH Usage Desc #003: RUH Attributes: Unused 00:10:20.449 RUH Usage Desc #004: RUH Attributes: Unused 00:10:20.449 RUH Usage Desc #005: RUH Attributes: Unused 00:10:20.449 RUH Usage Desc #006: RUH Attributes: Unused 00:10:20.449 RUH Usage Desc #007: RUH Attributes: Unused 00:10:20.449 00:10:20.449 FDP statistics log page 00:10:20.449 ======================= 00:10:20.449 Host bytes with metadata written: 404725760 00:10:20.449 Media bytes with metadata written: 404787200 00:10:20.449 Media bytes erased: 0 00:10:20.449 00:10:20.449 FDP events log page 00:10:20.449 =================== 00:10:20.449 Number of FDP events: 0 00:10:20.449 00:10:20.449 ************************************ 00:10:20.449 END TEST nvme_identify 00:10:20.449 ************************************ 00:10:20.449 00:10:20.449 real 0m1.623s 00:10:20.449 user 0m0.675s 00:10:20.449 sys 0m0.745s 00:10:20.449 20:56:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:20.449 20:56:41 -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 20:56:41 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:20.449 20:56:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:20.449 20:56:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.449 20:56:41 -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 ************************************ 00:10:20.449 START TEST nvme_perf 00:10:20.449 ************************************ 00:10:20.449 20:56:41 -- common/autotest_common.sh@1114 -- # nvme_perf 00:10:20.449 20:56:41 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:21.829 Initializing NVMe Controllers 00:10:21.829 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:21.829
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:21.829 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:21.829 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:21.829 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:21.829 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:21.829 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:21.829 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:21.829 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:21.829 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:21.829 Initialization complete. Launching workers. 00:10:21.829 ======================================================== 00:10:21.829 Latency(us) 00:10:21.829 Device Information : IOPS MiB/s Average min max 00:10:21.829 PCIE (0000:00:06.0) NSID 1 from core 0: 12680.70 148.60 10084.71 7399.51 44265.12 00:10:21.829 PCIE (0000:00:07.0) NSID 1 from core 0: 12680.70 148.60 10067.04 7608.49 42392.08 00:10:21.829 PCIE (0000:00:09.0) NSID 1 from core 0: 12680.70 148.60 10046.43 7538.68 41034.01 00:10:21.829 PCIE (0000:00:08.0) NSID 1 from core 0: 12680.70 148.60 10025.85 7589.04 38893.59 00:10:21.829 PCIE (0000:00:08.0) NSID 2 from core 0: 12807.51 150.09 9904.66 7545.68 26711.83 00:10:21.829 PCIE (0000:00:08.0) NSID 3 from core 0: 12807.51 150.09 9883.04 7589.17 24893.41 00:10:21.829 ======================================================== 00:10:21.829 Total : 76337.81 894.58 10001.60 7399.51 44265.12 00:10:21.829 00:10:21.829 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:21.829 ================================================================================= 00:10:21.829 1.00000% : 7685.585us 00:10:21.829 10.00000% : 8519.680us 00:10:21.829 25.00000% : 9055.884us 00:10:21.829 50.00000% : 9711.244us 00:10:21.829 75.00000% : 10426.182us 00:10:21.829 90.00000% : 11021.964us 00:10:21.829 95.00000% : 11498.589us 00:10:21.829 98.00000% : 14239.185us 00:10:21.829 99.00000% : 15490.327us 00:10:21.829 99.50000% : 41704.727us 00:10:21.829 99.90000% : 43611.229us 00:10:21.829 99.99000% : 44326.167us 00:10:21.829 99.99900% : 44326.167us 00:10:21.829 99.99990% : 44326.167us 00:10:21.829 99.99999% : 44326.167us 00:10:21.829 00:10:21.829 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:21.829 ================================================================================= 00:10:21.829 1.00000% : 7864.320us 00:10:21.829 10.00000% : 8579.258us 00:10:21.829 25.00000% : 9115.462us 00:10:21.829 50.00000% : 9711.244us 00:10:21.829 75.00000% : 10366.604us 00:10:21.829 90.00000% : 10902.807us 00:10:21.829 95.00000% : 11677.324us 00:10:21.829 98.00000% : 13822.138us 00:10:21.829 99.00000% : 15252.015us 00:10:21.829 99.50000% : 40036.538us 00:10:21.829 99.90000% : 42181.353us 00:10:21.829 99.99000% : 42419.665us 00:10:21.829 99.99900% : 42419.665us 00:10:21.829 99.99990% : 42419.665us 00:10:21.830 99.99999% : 42419.665us 00:10:21.830 00:10:21.830 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:21.830 ================================================================================= 00:10:21.830 1.00000% : 7864.320us 00:10:21.830 10.00000% : 8579.258us 00:10:21.830 25.00000% : 9115.462us 00:10:21.830 50.00000% : 9711.244us 00:10:21.830 75.00000% : 10366.604us 00:10:21.830 90.00000% : 10962.385us 00:10:21.830 95.00000% : 11796.480us 00:10:21.830 98.00000% : 12868.887us 00:10:21.830 99.00000% : 14239.185us 00:10:21.830 99.50000% : 38844.975us 00:10:21.830 99.90000% : 
40751.476us 00:10:21.830 99.99000% : 41228.102us 00:10:21.830 99.99900% : 41228.102us 00:10:21.830 99.99990% : 41228.102us 00:10:21.830 99.99999% : 41228.102us 00:10:21.830 00:10:21.830 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:21.830 ================================================================================= 00:10:21.830 1.00000% : 7864.320us 00:10:21.830 10.00000% : 8579.258us 00:10:21.830 25.00000% : 9175.040us 00:10:21.830 50.00000% : 9711.244us 00:10:21.830 75.00000% : 10366.604us 00:10:21.830 90.00000% : 10902.807us 00:10:21.830 95.00000% : 11736.902us 00:10:21.830 98.00000% : 13047.622us 00:10:21.830 99.00000% : 14298.764us 00:10:21.830 99.50000% : 36700.160us 00:10:21.830 99.90000% : 38606.662us 00:10:21.830 99.99000% : 39083.287us 00:10:21.830 99.99900% : 39083.287us 00:10:21.830 99.99990% : 39083.287us 00:10:21.830 99.99999% : 39083.287us 00:10:21.830 00:10:21.830 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:21.830 ================================================================================= 00:10:21.830 1.00000% : 7864.320us 00:10:21.830 10.00000% : 8579.258us 00:10:21.830 25.00000% : 9115.462us 00:10:21.830 50.00000% : 9711.244us 00:10:21.830 75.00000% : 10366.604us 00:10:21.830 90.00000% : 10902.807us 00:10:21.830 95.00000% : 11796.480us 00:10:21.830 98.00000% : 13583.825us 00:10:21.830 99.00000% : 15073.280us 00:10:21.830 99.50000% : 24188.742us 00:10:21.830 99.90000% : 26333.556us 00:10:21.830 99.99000% : 26691.025us 00:10:21.830 99.99900% : 26810.182us 00:10:21.830 99.99990% : 26810.182us 00:10:21.830 99.99999% : 26810.182us 00:10:21.830 00:10:21.830 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:21.830 ================================================================================= 00:10:21.830 1.00000% : 7923.898us 00:10:21.830 10.00000% : 8638.836us 00:10:21.830 25.00000% : 9115.462us 00:10:21.830 50.00000% : 9711.244us 00:10:21.830 75.00000% : 10366.604us 00:10:21.830 90.00000% : 10843.229us 00:10:21.830 95.00000% : 11498.589us 00:10:21.830 98.00000% : 14120.029us 00:10:21.830 99.00000% : 15252.015us 00:10:21.830 99.50000% : 22401.396us 00:10:21.830 99.90000% : 24427.055us 00:10:21.830 99.99000% : 24903.680us 00:10:21.830 99.99900% : 24903.680us 00:10:21.830 99.99990% : 24903.680us 00:10:21.830 99.99999% : 24903.680us 00:10:21.830 00:10:21.830 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:21.830 ============================================================================== 00:10:21.830 Range in us Cumulative IO count 00:10:21.830 7387.695 - 7417.484: 0.0156% ( 2) 00:10:21.830 7417.484 - 7447.273: 0.0859% ( 9) 00:10:21.830 7447.273 - 7477.062: 0.1641% ( 10) 00:10:21.830 7477.062 - 7506.851: 0.2891% ( 16) 00:10:21.830 7506.851 - 7536.640: 0.3984% ( 14) 00:10:21.830 7536.640 - 7566.429: 0.5000% ( 13) 00:10:21.830 7566.429 - 7596.218: 0.6797% ( 23) 00:10:21.830 7596.218 - 7626.007: 0.8359% ( 20) 00:10:21.830 7626.007 - 7685.585: 1.1797% ( 44) 00:10:21.830 7685.585 - 7745.164: 1.5781% ( 51) 00:10:21.830 7745.164 - 7804.742: 2.0625% ( 62) 00:10:21.830 7804.742 - 7864.320: 2.7109% ( 83) 00:10:21.830 7864.320 - 7923.898: 3.3359% ( 80) 00:10:21.830 7923.898 - 7983.476: 3.9453% ( 78) 00:10:21.830 7983.476 - 8043.055: 4.6094% ( 85) 00:10:21.830 8043.055 - 8102.633: 5.2734% ( 85) 00:10:21.830 8102.633 - 8162.211: 5.9688% ( 89) 00:10:21.830 8162.211 - 8221.789: 6.6953% ( 93) 00:10:21.830 8221.789 - 8281.367: 7.3906% ( 89) 00:10:21.830 8281.367 - 8340.945: 8.1172% ( 93) 
00:10:21.830 8340.945 - 8400.524: 8.9766% ( 110) 00:10:21.830 8400.524 - 8460.102: 9.9062% ( 119) 00:10:21.830 8460.102 - 8519.680: 10.8984% ( 127) 00:10:21.830 8519.680 - 8579.258: 12.1250% ( 157) 00:10:21.830 8579.258 - 8638.836: 13.4609% ( 171) 00:10:21.830 8638.836 - 8698.415: 14.9219% ( 187) 00:10:21.830 8698.415 - 8757.993: 16.5156% ( 204) 00:10:21.830 8757.993 - 8817.571: 18.2812% ( 226) 00:10:21.830 8817.571 - 8877.149: 20.1250% ( 236) 00:10:21.830 8877.149 - 8936.727: 21.9922% ( 239) 00:10:21.830 8936.727 - 8996.305: 24.1484% ( 276) 00:10:21.830 8996.305 - 9055.884: 26.2344% ( 267) 00:10:21.830 9055.884 - 9115.462: 28.4453% ( 283) 00:10:21.830 9115.462 - 9175.040: 30.5391% ( 268) 00:10:21.830 9175.040 - 9234.618: 32.7109% ( 278) 00:10:21.830 9234.618 - 9294.196: 34.8516% ( 274) 00:10:21.830 9294.196 - 9353.775: 37.1016% ( 288) 00:10:21.830 9353.775 - 9413.353: 39.1875% ( 267) 00:10:21.830 9413.353 - 9472.931: 41.3516% ( 277) 00:10:21.830 9472.931 - 9532.509: 43.4609% ( 270) 00:10:21.830 9532.509 - 9592.087: 45.7344% ( 291) 00:10:21.830 9592.087 - 9651.665: 47.8828% ( 275) 00:10:21.830 9651.665 - 9711.244: 50.1797% ( 294) 00:10:21.830 9711.244 - 9770.822: 52.3672% ( 280) 00:10:21.830 9770.822 - 9830.400: 54.6172% ( 288) 00:10:21.830 9830.400 - 9889.978: 56.8594% ( 287) 00:10:21.830 9889.978 - 9949.556: 59.0938% ( 286) 00:10:21.830 9949.556 - 10009.135: 61.2969% ( 282) 00:10:21.830 10009.135 - 10068.713: 63.5000% ( 282) 00:10:21.830 10068.713 - 10128.291: 65.4922% ( 255) 00:10:21.830 10128.291 - 10187.869: 67.5703% ( 266) 00:10:21.830 10187.869 - 10247.447: 69.3828% ( 232) 00:10:21.830 10247.447 - 10307.025: 71.4141% ( 260) 00:10:21.830 10307.025 - 10366.604: 73.2812% ( 239) 00:10:21.830 10366.604 - 10426.182: 75.0234% ( 223) 00:10:21.830 10426.182 - 10485.760: 76.7422% ( 220) 00:10:21.830 10485.760 - 10545.338: 78.3516% ( 206) 00:10:21.830 10545.338 - 10604.916: 79.9609% ( 206) 00:10:21.830 10604.916 - 10664.495: 81.5547% ( 204) 00:10:21.830 10664.495 - 10724.073: 83.1562% ( 205) 00:10:21.830 10724.073 - 10783.651: 84.6641% ( 193) 00:10:21.830 10783.651 - 10843.229: 86.2109% ( 198) 00:10:21.830 10843.229 - 10902.807: 87.6484% ( 184) 00:10:21.830 10902.807 - 10962.385: 89.0469% ( 179) 00:10:21.830 10962.385 - 11021.964: 90.2109% ( 149) 00:10:21.830 11021.964 - 11081.542: 91.2656% ( 135) 00:10:21.830 11081.542 - 11141.120: 92.1875% ( 118) 00:10:21.830 11141.120 - 11200.698: 92.9453% ( 97) 00:10:21.830 11200.698 - 11260.276: 93.5625% ( 79) 00:10:21.830 11260.276 - 11319.855: 94.0781% ( 66) 00:10:21.830 11319.855 - 11379.433: 94.4766% ( 51) 00:10:21.830 11379.433 - 11439.011: 94.7656% ( 37) 00:10:21.830 11439.011 - 11498.589: 95.0156% ( 32) 00:10:21.830 11498.589 - 11558.167: 95.2422% ( 29) 00:10:21.830 11558.167 - 11617.745: 95.4453% ( 26) 00:10:21.830 11617.745 - 11677.324: 95.6094% ( 21) 00:10:21.830 11677.324 - 11736.902: 95.7266% ( 15) 00:10:21.830 11736.902 - 11796.480: 95.8516% ( 16) 00:10:21.830 11796.480 - 11856.058: 95.9453% ( 12) 00:10:21.830 11856.058 - 11915.636: 96.0391% ( 12) 00:10:21.830 11915.636 - 11975.215: 96.1016% ( 8) 00:10:21.830 11975.215 - 12034.793: 96.1406% ( 5) 00:10:21.830 12034.793 - 12094.371: 96.2031% ( 8) 00:10:21.830 12094.371 - 12153.949: 96.2500% ( 6) 00:10:21.830 12153.949 - 12213.527: 96.2812% ( 4) 00:10:21.830 12213.527 - 12273.105: 96.3125% ( 4) 00:10:21.830 12273.105 - 12332.684: 96.3359% ( 3) 00:10:21.830 12332.684 - 12392.262: 96.3516% ( 2) 00:10:21.830 12392.262 - 12451.840: 96.3672% ( 2) 00:10:21.830 12451.840 - 12511.418: 96.3828% ( 2) 
00:10:21.830 12511.418 - 12570.996: 96.3984% ( 2) 00:10:21.830 12570.996 - 12630.575: 96.4141% ( 2) 00:10:21.831 12630.575 - 12690.153: 96.4453% ( 4) 00:10:21.831 12749.731 - 12809.309: 96.4922% ( 6) 00:10:21.831 12809.309 - 12868.887: 96.5391% ( 6) 00:10:21.831 12868.887 - 12928.465: 96.6094% ( 9) 00:10:21.831 12928.465 - 12988.044: 96.6641% ( 7) 00:10:21.831 12988.044 - 13047.622: 96.7344% ( 9) 00:10:21.831 13047.622 - 13107.200: 96.7891% ( 7) 00:10:21.831 13107.200 - 13166.778: 96.8594% ( 9) 00:10:21.831 13166.778 - 13226.356: 96.9219% ( 8) 00:10:21.831 13226.356 - 13285.935: 96.9922% ( 9) 00:10:21.831 13285.935 - 13345.513: 97.0547% ( 8) 00:10:21.831 13345.513 - 13405.091: 97.1328% ( 10) 00:10:21.831 13405.091 - 13464.669: 97.1875% ( 7) 00:10:21.831 13464.669 - 13524.247: 97.2500% ( 8) 00:10:21.831 13524.247 - 13583.825: 97.3281% ( 10) 00:10:21.831 13583.825 - 13643.404: 97.3828% ( 7) 00:10:21.831 13643.404 - 13702.982: 97.4453% ( 8) 00:10:21.831 13702.982 - 13762.560: 97.5156% ( 9) 00:10:21.831 13762.560 - 13822.138: 97.6016% ( 11) 00:10:21.831 13822.138 - 13881.716: 97.6484% ( 6) 00:10:21.831 13881.716 - 13941.295: 97.7188% ( 9) 00:10:21.831 13941.295 - 14000.873: 97.7812% ( 8) 00:10:21.831 14000.873 - 14060.451: 97.8359% ( 7) 00:10:21.831 14060.451 - 14120.029: 97.9141% ( 10) 00:10:21.831 14120.029 - 14179.607: 97.9844% ( 9) 00:10:21.831 14179.607 - 14239.185: 98.0469% ( 8) 00:10:21.831 14239.185 - 14298.764: 98.1016% ( 7) 00:10:21.831 14298.764 - 14358.342: 98.1641% ( 8) 00:10:21.831 14358.342 - 14417.920: 98.2188% ( 7) 00:10:21.831 14417.920 - 14477.498: 98.2812% ( 8) 00:10:21.831 14477.498 - 14537.076: 98.3438% ( 8) 00:10:21.831 14537.076 - 14596.655: 98.4297% ( 11) 00:10:21.831 14596.655 - 14656.233: 98.4766% ( 6) 00:10:21.831 14656.233 - 14715.811: 98.5312% ( 7) 00:10:21.831 14715.811 - 14775.389: 98.6016% ( 9) 00:10:21.831 14775.389 - 14834.967: 98.6484% ( 6) 00:10:21.831 14834.967 - 14894.545: 98.7031% ( 7) 00:10:21.831 14894.545 - 14954.124: 98.7422% ( 5) 00:10:21.831 14954.124 - 15013.702: 98.7969% ( 7) 00:10:21.831 15013.702 - 15073.280: 98.8359% ( 5) 00:10:21.831 15073.280 - 15132.858: 98.8906% ( 7) 00:10:21.831 15132.858 - 15192.436: 98.9062% ( 2) 00:10:21.831 15192.436 - 15252.015: 98.9297% ( 3) 00:10:21.831 15252.015 - 15371.171: 98.9766% ( 6) 00:10:21.831 15371.171 - 15490.327: 99.0000% ( 3) 00:10:21.831 38844.975 - 39083.287: 99.0234% ( 3) 00:10:21.831 39083.287 - 39321.600: 99.0859% ( 8) 00:10:21.831 39321.600 - 39559.913: 99.1172% ( 4) 00:10:21.831 39559.913 - 39798.225: 99.1641% ( 6) 00:10:21.831 39798.225 - 40036.538: 99.2109% ( 6) 00:10:21.831 40036.538 - 40274.851: 99.2578% ( 6) 00:10:21.831 40274.851 - 40513.164: 99.2969% ( 5) 00:10:21.831 40513.164 - 40751.476: 99.3516% ( 7) 00:10:21.831 40751.476 - 40989.789: 99.3906% ( 5) 00:10:21.831 40989.789 - 41228.102: 99.4297% ( 5) 00:10:21.831 41228.102 - 41466.415: 99.4766% ( 6) 00:10:21.831 41466.415 - 41704.727: 99.5234% ( 6) 00:10:21.831 41704.727 - 41943.040: 99.5625% ( 5) 00:10:21.831 41943.040 - 42181.353: 99.6094% ( 6) 00:10:21.831 42181.353 - 42419.665: 99.6641% ( 7) 00:10:21.831 42419.665 - 42657.978: 99.7031% ( 5) 00:10:21.831 42657.978 - 42896.291: 99.7578% ( 7) 00:10:21.831 42896.291 - 43134.604: 99.8047% ( 6) 00:10:21.831 43134.604 - 43372.916: 99.8516% ( 6) 00:10:21.831 43372.916 - 43611.229: 99.9062% ( 7) 00:10:21.831 43611.229 - 43849.542: 99.9453% ( 5) 00:10:21.831 43849.542 - 44087.855: 99.9688% ( 3) 00:10:21.831 44087.855 - 44326.167: 100.0000% ( 4) 00:10:21.831 00:10:21.831 Latency histogram for 
PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:21.831 ============================================================================== 00:10:21.831 Range in us Cumulative IO count 00:10:21.831 7596.218 - 7626.007: 0.0156% ( 2) 00:10:21.831 7626.007 - 7685.585: 0.2266% ( 27) 00:10:21.831 7685.585 - 7745.164: 0.5000% ( 35) 00:10:21.831 7745.164 - 7804.742: 0.8438% ( 44) 00:10:21.831 7804.742 - 7864.320: 1.2656% ( 54) 00:10:21.831 7864.320 - 7923.898: 1.7188% ( 58) 00:10:21.831 7923.898 - 7983.476: 2.1875% ( 60) 00:10:21.831 7983.476 - 8043.055: 2.7344% ( 70) 00:10:21.831 8043.055 - 8102.633: 3.4375% ( 90) 00:10:21.831 8102.633 - 8162.211: 4.2109% ( 99) 00:10:21.831 8162.211 - 8221.789: 5.0312% ( 105) 00:10:21.831 8221.789 - 8281.367: 5.9062% ( 112) 00:10:21.831 8281.367 - 8340.945: 6.6953% ( 101) 00:10:21.831 8340.945 - 8400.524: 7.6484% ( 122) 00:10:21.831 8400.524 - 8460.102: 8.5703% ( 118) 00:10:21.831 8460.102 - 8519.680: 9.4375% ( 111) 00:10:21.831 8519.680 - 8579.258: 10.4297% ( 127) 00:10:21.831 8579.258 - 8638.836: 11.3828% ( 122) 00:10:21.831 8638.836 - 8698.415: 12.4922% ( 142) 00:10:21.831 8698.415 - 8757.993: 13.7656% ( 163) 00:10:21.831 8757.993 - 8817.571: 15.1953% ( 183) 00:10:21.831 8817.571 - 8877.149: 16.8203% ( 208) 00:10:21.831 8877.149 - 8936.727: 18.5938% ( 227) 00:10:21.831 8936.727 - 8996.305: 20.6328% ( 261) 00:10:21.831 8996.305 - 9055.884: 22.7891% ( 276) 00:10:21.831 9055.884 - 9115.462: 25.1250% ( 299) 00:10:21.831 9115.462 - 9175.040: 27.5547% ( 311) 00:10:21.831 9175.040 - 9234.618: 30.0859% ( 324) 00:10:21.831 9234.618 - 9294.196: 32.5703% ( 318) 00:10:21.831 9294.196 - 9353.775: 35.1406% ( 329) 00:10:21.831 9353.775 - 9413.353: 37.7578% ( 335) 00:10:21.831 9413.353 - 9472.931: 40.3047% ( 326) 00:10:21.831 9472.931 - 9532.509: 42.9297% ( 336) 00:10:21.831 9532.509 - 9592.087: 45.5781% ( 339) 00:10:21.831 9592.087 - 9651.665: 48.1719% ( 332) 00:10:21.831 9651.665 - 9711.244: 50.7969% ( 336) 00:10:21.831 9711.244 - 9770.822: 53.3984% ( 333) 00:10:21.831 9770.822 - 9830.400: 56.0156% ( 335) 00:10:21.831 9830.400 - 9889.978: 58.4141% ( 307) 00:10:21.831 9889.978 - 9949.556: 60.7812% ( 303) 00:10:21.831 9949.556 - 10009.135: 63.0938% ( 296) 00:10:21.831 10009.135 - 10068.713: 65.3828% ( 293) 00:10:21.831 10068.713 - 10128.291: 67.6094% ( 285) 00:10:21.831 10128.291 - 10187.869: 69.8047% ( 281) 00:10:21.831 10187.869 - 10247.447: 71.9766% ( 278) 00:10:21.831 10247.447 - 10307.025: 73.9062% ( 247) 00:10:21.831 10307.025 - 10366.604: 75.8906% ( 254) 00:10:21.831 10366.604 - 10426.182: 77.8281% ( 248) 00:10:21.831 10426.182 - 10485.760: 79.7578% ( 247) 00:10:21.831 10485.760 - 10545.338: 81.6016% ( 236) 00:10:21.831 10545.338 - 10604.916: 83.4453% ( 236) 00:10:21.831 10604.916 - 10664.495: 85.2812% ( 235) 00:10:21.831 10664.495 - 10724.073: 86.9531% ( 214) 00:10:21.831 10724.073 - 10783.651: 88.4297% ( 189) 00:10:21.831 10783.651 - 10843.229: 89.7188% ( 165) 00:10:21.831 10843.229 - 10902.807: 90.8672% ( 147) 00:10:21.831 10902.807 - 10962.385: 91.7891% ( 118) 00:10:21.831 10962.385 - 11021.964: 92.4922% ( 90) 00:10:21.831 11021.964 - 11081.542: 93.0625% ( 73) 00:10:21.831 11081.542 - 11141.120: 93.5000% ( 56) 00:10:21.831 11141.120 - 11200.698: 93.8359% ( 43) 00:10:21.831 11200.698 - 11260.276: 94.0469% ( 27) 00:10:21.831 11260.276 - 11319.855: 94.2422% ( 25) 00:10:21.831 11319.855 - 11379.433: 94.4141% ( 22) 00:10:21.831 11379.433 - 11439.011: 94.5625% ( 19) 00:10:21.831 11439.011 - 11498.589: 94.6797% ( 15) 00:10:21.831 11498.589 - 11558.167: 94.7891% ( 14) 
00:10:21.831 11558.167 - 11617.745: 94.9141% ( 16) 00:10:21.831 11617.745 - 11677.324: 95.0156% ( 13) 00:10:21.831 11677.324 - 11736.902: 95.1328% ( 15) 00:10:21.831 11736.902 - 11796.480: 95.2344% ( 13) 00:10:21.831 11796.480 - 11856.058: 95.3047% ( 9) 00:10:21.831 11856.058 - 11915.636: 95.3672% ( 8) 00:10:21.831 11915.636 - 11975.215: 95.4297% ( 8) 00:10:21.831 11975.215 - 12034.793: 95.5078% ( 10) 00:10:21.831 12034.793 - 12094.371: 95.6016% ( 12) 00:10:21.831 12094.371 - 12153.949: 95.6797% ( 10) 00:10:21.831 12153.949 - 12213.527: 95.7812% ( 13) 00:10:21.831 12213.527 - 12273.105: 95.8906% ( 14) 00:10:21.831 12273.105 - 12332.684: 95.9766% ( 11) 00:10:21.831 12332.684 - 12392.262: 96.0859% ( 14) 00:10:21.831 12392.262 - 12451.840: 96.1797% ( 12) 00:10:21.831 12451.840 - 12511.418: 96.2656% ( 11) 00:10:21.831 12511.418 - 12570.996: 96.3438% ( 10) 00:10:21.831 12570.996 - 12630.575: 96.4609% ( 15) 00:10:21.831 12630.575 - 12690.153: 96.5703% ( 14) 00:10:21.831 12690.153 - 12749.731: 96.6719% ( 13) 00:10:21.831 12749.731 - 12809.309: 96.7500% ( 10) 00:10:21.831 12809.309 - 12868.887: 96.8281% ( 10) 00:10:21.831 12868.887 - 12928.465: 96.8984% ( 9) 00:10:21.831 12928.465 - 12988.044: 96.9766% ( 10) 00:10:21.831 12988.044 - 13047.622: 97.0469% ( 9) 00:10:21.831 13047.622 - 13107.200: 97.1250% ( 10) 00:10:21.832 13107.200 - 13166.778: 97.2109% ( 11) 00:10:21.832 13166.778 - 13226.356: 97.2891% ( 10) 00:10:21.832 13226.356 - 13285.935: 97.3672% ( 10) 00:10:21.832 13285.935 - 13345.513: 97.4531% ( 11) 00:10:21.832 13345.513 - 13405.091: 97.5156% ( 8) 00:10:21.832 13405.091 - 13464.669: 97.6016% ( 11) 00:10:21.832 13464.669 - 13524.247: 97.6719% ( 9) 00:10:21.832 13524.247 - 13583.825: 97.7422% ( 9) 00:10:21.832 13583.825 - 13643.404: 97.8203% ( 10) 00:10:21.832 13643.404 - 13702.982: 97.8984% ( 10) 00:10:21.832 13702.982 - 13762.560: 97.9766% ( 10) 00:10:21.832 13762.560 - 13822.138: 98.0547% ( 10) 00:10:21.832 13822.138 - 13881.716: 98.1250% ( 9) 00:10:21.832 13881.716 - 13941.295: 98.1953% ( 9) 00:10:21.832 13941.295 - 14000.873: 98.2578% ( 8) 00:10:21.832 14000.873 - 14060.451: 98.3047% ( 6) 00:10:21.832 14060.451 - 14120.029: 98.3594% ( 7) 00:10:21.832 14120.029 - 14179.607: 98.4062% ( 6) 00:10:21.832 14179.607 - 14239.185: 98.4531% ( 6) 00:10:21.832 14239.185 - 14298.764: 98.5000% ( 6) 00:10:21.832 14298.764 - 14358.342: 98.5547% ( 7) 00:10:21.832 14358.342 - 14417.920: 98.5938% ( 5) 00:10:21.832 14417.920 - 14477.498: 98.6406% ( 6) 00:10:21.832 14477.498 - 14537.076: 98.6797% ( 5) 00:10:21.832 14537.076 - 14596.655: 98.7344% ( 7) 00:10:21.832 14596.655 - 14656.233: 98.7734% ( 5) 00:10:21.832 14656.233 - 14715.811: 98.8203% ( 6) 00:10:21.832 14715.811 - 14775.389: 98.8594% ( 5) 00:10:21.832 14775.389 - 14834.967: 98.8828% ( 3) 00:10:21.832 14834.967 - 14894.545: 98.8984% ( 2) 00:10:21.832 14894.545 - 14954.124: 98.9141% ( 2) 00:10:21.832 14954.124 - 15013.702: 98.9375% ( 3) 00:10:21.832 15013.702 - 15073.280: 98.9531% ( 2) 00:10:21.832 15073.280 - 15132.858: 98.9688% ( 2) 00:10:21.832 15132.858 - 15192.436: 98.9922% ( 3) 00:10:21.832 15192.436 - 15252.015: 99.0000% ( 1) 00:10:21.832 37415.098 - 37653.411: 99.0469% ( 6) 00:10:21.832 37653.411 - 37891.724: 99.0938% ( 6) 00:10:21.832 37891.724 - 38130.036: 99.1406% ( 6) 00:10:21.832 38130.036 - 38368.349: 99.1953% ( 7) 00:10:21.832 38368.349 - 38606.662: 99.2422% ( 6) 00:10:21.832 38606.662 - 38844.975: 99.2812% ( 5) 00:10:21.832 38844.975 - 39083.287: 99.3359% ( 7) 00:10:21.832 39083.287 - 39321.600: 99.3750% ( 5) 00:10:21.832 39321.600 - 
39559.913: 99.4297% ( 7) 00:10:21.832 39559.913 - 39798.225: 99.4766% ( 6) 00:10:21.832 39798.225 - 40036.538: 99.5156% ( 5) 00:10:21.832 40036.538 - 40274.851: 99.5625% ( 6) 00:10:21.832 40274.851 - 40513.164: 99.6094% ( 6) 00:10:21.832 40513.164 - 40751.476: 99.6562% ( 6) 00:10:21.832 40751.476 - 40989.789: 99.7031% ( 6) 00:10:21.832 40989.789 - 41228.102: 99.7578% ( 7) 00:10:21.832 41228.102 - 41466.415: 99.8047% ( 6) 00:10:21.832 41466.415 - 41704.727: 99.8516% ( 6) 00:10:21.832 41704.727 - 41943.040: 99.8984% ( 6) 00:10:21.832 41943.040 - 42181.353: 99.9531% ( 7) 00:10:21.832 42181.353 - 42419.665: 100.0000% ( 6) 00:10:21.832 00:10:21.832 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:21.832 ============================================================================== 00:10:21.832 Range in us Cumulative IO count 00:10:21.832 7536.640 - 7566.429: 0.0156% ( 2) 00:10:21.832 7566.429 - 7596.218: 0.0312% ( 2) 00:10:21.832 7596.218 - 7626.007: 0.0625% ( 4) 00:10:21.832 7626.007 - 7685.585: 0.2109% ( 19) 00:10:21.832 7685.585 - 7745.164: 0.5078% ( 38) 00:10:21.832 7745.164 - 7804.742: 0.9062% ( 51) 00:10:21.832 7804.742 - 7864.320: 1.2891% ( 49) 00:10:21.832 7864.320 - 7923.898: 1.7422% ( 58) 00:10:21.832 7923.898 - 7983.476: 2.2734% ( 68) 00:10:21.832 7983.476 - 8043.055: 2.8359% ( 72) 00:10:21.832 8043.055 - 8102.633: 3.5078% ( 86) 00:10:21.832 8102.633 - 8162.211: 4.2500% ( 95) 00:10:21.832 8162.211 - 8221.789: 4.9922% ( 95) 00:10:21.832 8221.789 - 8281.367: 5.8359% ( 108) 00:10:21.832 8281.367 - 8340.945: 6.6172% ( 100) 00:10:21.832 8340.945 - 8400.524: 7.4922% ( 112) 00:10:21.832 8400.524 - 8460.102: 8.3516% ( 110) 00:10:21.832 8460.102 - 8519.680: 9.2500% ( 115) 00:10:21.832 8519.680 - 8579.258: 10.1797% ( 119) 00:10:21.832 8579.258 - 8638.836: 11.2031% ( 131) 00:10:21.832 8638.836 - 8698.415: 12.3281% ( 144) 00:10:21.832 8698.415 - 8757.993: 13.5547% ( 157) 00:10:21.832 8757.993 - 8817.571: 15.0000% ( 185) 00:10:21.832 8817.571 - 8877.149: 16.6016% ( 205) 00:10:21.832 8877.149 - 8936.727: 18.3984% ( 230) 00:10:21.832 8936.727 - 8996.305: 20.4609% ( 264) 00:10:21.832 8996.305 - 9055.884: 22.7266% ( 290) 00:10:21.832 9055.884 - 9115.462: 25.1562% ( 311) 00:10:21.832 9115.462 - 9175.040: 27.7031% ( 326) 00:10:21.832 9175.040 - 9234.618: 30.2500% ( 326) 00:10:21.832 9234.618 - 9294.196: 32.8125% ( 328) 00:10:21.832 9294.196 - 9353.775: 35.3125% ( 320) 00:10:21.832 9353.775 - 9413.353: 37.9375% ( 336) 00:10:21.832 9413.353 - 9472.931: 40.4609% ( 323) 00:10:21.832 9472.931 - 9532.509: 43.0391% ( 330) 00:10:21.832 9532.509 - 9592.087: 45.6406% ( 333) 00:10:21.832 9592.087 - 9651.665: 48.1172% ( 317) 00:10:21.832 9651.665 - 9711.244: 50.6406% ( 323) 00:10:21.832 9711.244 - 9770.822: 53.2109% ( 329) 00:10:21.832 9770.822 - 9830.400: 55.6953% ( 318) 00:10:21.832 9830.400 - 9889.978: 58.1016% ( 308) 00:10:21.832 9889.978 - 9949.556: 60.3828% ( 292) 00:10:21.832 9949.556 - 10009.135: 62.6406% ( 289) 00:10:21.832 10009.135 - 10068.713: 64.8438% ( 282) 00:10:21.832 10068.713 - 10128.291: 67.0625% ( 284) 00:10:21.832 10128.291 - 10187.869: 69.2188% ( 276) 00:10:21.832 10187.869 - 10247.447: 71.2578% ( 261) 00:10:21.832 10247.447 - 10307.025: 73.1953% ( 248) 00:10:21.832 10307.025 - 10366.604: 75.0781% ( 241) 00:10:21.832 10366.604 - 10426.182: 76.9141% ( 235) 00:10:21.832 10426.182 - 10485.760: 78.7891% ( 240) 00:10:21.832 10485.760 - 10545.338: 80.6016% ( 232) 00:10:21.832 10545.338 - 10604.916: 82.4688% ( 239) 00:10:21.832 10604.916 - 10664.495: 84.1953% ( 221) 00:10:21.832 
10664.495 - 10724.073: 85.9453% ( 224) 00:10:21.832 10724.073 - 10783.651: 87.4766% ( 196) 00:10:21.832 10783.651 - 10843.229: 88.7422% ( 162) 00:10:21.832 10843.229 - 10902.807: 89.9375% ( 153) 00:10:21.832 10902.807 - 10962.385: 90.8359% ( 115) 00:10:21.832 10962.385 - 11021.964: 91.4844% ( 83) 00:10:21.832 11021.964 - 11081.542: 92.0547% ( 73) 00:10:21.832 11081.542 - 11141.120: 92.5234% ( 60) 00:10:21.832 11141.120 - 11200.698: 92.8984% ( 48) 00:10:21.832 11200.698 - 11260.276: 93.1953% ( 38) 00:10:21.832 11260.276 - 11319.855: 93.4375% ( 31) 00:10:21.832 11319.855 - 11379.433: 93.6641% ( 29) 00:10:21.832 11379.433 - 11439.011: 93.8516% ( 24) 00:10:21.832 11439.011 - 11498.589: 94.0781% ( 29) 00:10:21.832 11498.589 - 11558.167: 94.2734% ( 25) 00:10:21.832 11558.167 - 11617.745: 94.4922% ( 28) 00:10:21.832 11617.745 - 11677.324: 94.7031% ( 27) 00:10:21.832 11677.324 - 11736.902: 94.9297% ( 29) 00:10:21.832 11736.902 - 11796.480: 95.1328% ( 26) 00:10:21.832 11796.480 - 11856.058: 95.3516% ( 28) 00:10:21.832 11856.058 - 11915.636: 95.5703% ( 28) 00:10:21.832 11915.636 - 11975.215: 95.7734% ( 26) 00:10:21.832 11975.215 - 12034.793: 95.9922% ( 28) 00:10:21.832 12034.793 - 12094.371: 96.1797% ( 24) 00:10:21.832 12094.371 - 12153.949: 96.3516% ( 22) 00:10:21.832 12153.949 - 12213.527: 96.5156% ( 21) 00:10:21.832 12213.527 - 12273.105: 96.6953% ( 23) 00:10:21.832 12273.105 - 12332.684: 96.8594% ( 21) 00:10:21.832 12332.684 - 12392.262: 97.0156% ( 20) 00:10:21.832 12392.262 - 12451.840: 97.1719% ( 20) 00:10:21.832 12451.840 - 12511.418: 97.3125% ( 18) 00:10:21.832 12511.418 - 12570.996: 97.4531% ( 18) 00:10:21.832 12570.996 - 12630.575: 97.5703% ( 15) 00:10:21.832 12630.575 - 12690.153: 97.6797% ( 14) 00:10:21.832 12690.153 - 12749.731: 97.7891% ( 14) 00:10:21.832 12749.731 - 12809.309: 97.8984% ( 14) 00:10:21.832 12809.309 - 12868.887: 98.0156% ( 15) 00:10:21.832 12868.887 - 12928.465: 98.1172% ( 13) 00:10:21.832 12928.465 - 12988.044: 98.2344% ( 15) 00:10:21.832 12988.044 - 13047.622: 98.3359% ( 13) 00:10:21.832 13047.622 - 13107.200: 98.4297% ( 12) 00:10:21.832 13107.200 - 13166.778: 98.5000% ( 9) 00:10:21.832 13166.778 - 13226.356: 98.5859% ( 11) 00:10:21.832 13226.356 - 13285.935: 98.6562% ( 9) 00:10:21.833 13285.935 - 13345.513: 98.6953% ( 5) 00:10:21.833 13345.513 - 13405.091: 98.7422% ( 6) 00:10:21.833 13405.091 - 13464.669: 98.7578% ( 2) 00:10:21.833 13464.669 - 13524.247: 98.7812% ( 3) 00:10:21.833 13524.247 - 13583.825: 98.7969% ( 2) 00:10:21.833 13583.825 - 13643.404: 98.8125% ( 2) 00:10:21.833 13643.404 - 13702.982: 98.8281% ( 2) 00:10:21.833 13702.982 - 13762.560: 98.8438% ( 2) 00:10:21.833 13762.560 - 13822.138: 98.8672% ( 3) 00:10:21.833 13822.138 - 13881.716: 98.8828% ( 2) 00:10:21.833 13881.716 - 13941.295: 98.9062% ( 3) 00:10:21.833 13941.295 - 14000.873: 98.9219% ( 2) 00:10:21.833 14000.873 - 14060.451: 98.9453% ( 3) 00:10:21.833 14060.451 - 14120.029: 98.9688% ( 3) 00:10:21.833 14120.029 - 14179.607: 98.9844% ( 2) 00:10:21.833 14179.607 - 14239.185: 99.0000% ( 2) 00:10:21.833 36223.535 - 36461.847: 99.0078% ( 1) 00:10:21.833 36461.847 - 36700.160: 99.0547% ( 6) 00:10:21.833 36700.160 - 36938.473: 99.1016% ( 6) 00:10:21.833 36938.473 - 37176.785: 99.1484% ( 6) 00:10:21.833 37176.785 - 37415.098: 99.2031% ( 7) 00:10:21.833 37415.098 - 37653.411: 99.2500% ( 6) 00:10:21.833 37653.411 - 37891.724: 99.3047% ( 7) 00:10:21.833 37891.724 - 38130.036: 99.3516% ( 6) 00:10:21.833 38130.036 - 38368.349: 99.4062% ( 7) 00:10:21.833 38368.349 - 38606.662: 99.4609% ( 7) 00:10:21.833 
38606.662 - 38844.975: 99.5078% ( 6) 00:10:21.833 38844.975 - 39083.287: 99.5625% ( 7) 00:10:21.833 39083.287 - 39321.600: 99.6094% ( 6) 00:10:21.833 39321.600 - 39559.913: 99.6641% ( 7) 00:10:21.833 39559.913 - 39798.225: 99.7188% ( 7) 00:10:21.833 39798.225 - 40036.538: 99.7734% ( 7) 00:10:21.833 40036.538 - 40274.851: 99.8281% ( 7) 00:10:21.833 40274.851 - 40513.164: 99.8828% ( 7) 00:10:21.833 40513.164 - 40751.476: 99.9297% ( 6) 00:10:21.833 40751.476 - 40989.789: 99.9844% ( 7) 00:10:21.833 40989.789 - 41228.102: 100.0000% ( 2) 00:10:21.833 00:10:21.833 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:21.833 ============================================================================== 00:10:21.833 Range in us Cumulative IO count 00:10:21.833 7566.429 - 7596.218: 0.0078% ( 1) 00:10:21.833 7596.218 - 7626.007: 0.0469% ( 5) 00:10:21.833 7626.007 - 7685.585: 0.2031% ( 20) 00:10:21.833 7685.585 - 7745.164: 0.4609% ( 33) 00:10:21.833 7745.164 - 7804.742: 0.7656% ( 39) 00:10:21.833 7804.742 - 7864.320: 1.1250% ( 46) 00:10:21.833 7864.320 - 7923.898: 1.5938% ( 60) 00:10:21.833 7923.898 - 7983.476: 2.0625% ( 60) 00:10:21.833 7983.476 - 8043.055: 2.6328% ( 73) 00:10:21.833 8043.055 - 8102.633: 3.2578% ( 80) 00:10:21.833 8102.633 - 8162.211: 4.0469% ( 101) 00:10:21.833 8162.211 - 8221.789: 4.8516% ( 103) 00:10:21.833 8221.789 - 8281.367: 5.6484% ( 102) 00:10:21.833 8281.367 - 8340.945: 6.4766% ( 106) 00:10:21.833 8340.945 - 8400.524: 7.3828% ( 116) 00:10:21.833 8400.524 - 8460.102: 8.2656% ( 113) 00:10:21.833 8460.102 - 8519.680: 9.1641% ( 115) 00:10:21.833 8519.680 - 8579.258: 10.0859% ( 118) 00:10:21.833 8579.258 - 8638.836: 11.0781% ( 127) 00:10:21.833 8638.836 - 8698.415: 12.2500% ( 150) 00:10:21.833 8698.415 - 8757.993: 13.5156% ( 162) 00:10:21.833 8757.993 - 8817.571: 15.0156% ( 192) 00:10:21.833 8817.571 - 8877.149: 16.6406% ( 208) 00:10:21.833 8877.149 - 8936.727: 18.4609% ( 233) 00:10:21.833 8936.727 - 8996.305: 20.4766% ( 258) 00:10:21.833 8996.305 - 9055.884: 22.6484% ( 278) 00:10:21.833 9055.884 - 9115.462: 24.9531% ( 295) 00:10:21.833 9115.462 - 9175.040: 27.4219% ( 316) 00:10:21.833 9175.040 - 9234.618: 29.9219% ( 320) 00:10:21.833 9234.618 - 9294.196: 32.5156% ( 332) 00:10:21.833 9294.196 - 9353.775: 35.0781% ( 328) 00:10:21.833 9353.775 - 9413.353: 37.6016% ( 323) 00:10:21.833 9413.353 - 9472.931: 40.2734% ( 342) 00:10:21.833 9472.931 - 9532.509: 42.8281% ( 327) 00:10:21.833 9532.509 - 9592.087: 45.4766% ( 339) 00:10:21.833 9592.087 - 9651.665: 48.1328% ( 340) 00:10:21.833 9651.665 - 9711.244: 50.7656% ( 337) 00:10:21.833 9711.244 - 9770.822: 53.3594% ( 332) 00:10:21.833 9770.822 - 9830.400: 55.9141% ( 327) 00:10:21.833 9830.400 - 9889.978: 58.3750% ( 315) 00:10:21.833 9889.978 - 9949.556: 60.7188% ( 300) 00:10:21.833 9949.556 - 10009.135: 62.9531% ( 286) 00:10:21.833 10009.135 - 10068.713: 65.1406% ( 280) 00:10:21.833 10068.713 - 10128.291: 67.3359% ( 281) 00:10:21.833 10128.291 - 10187.869: 69.4219% ( 267) 00:10:21.833 10187.869 - 10247.447: 71.5547% ( 273) 00:10:21.833 10247.447 - 10307.025: 73.4766% ( 246) 00:10:21.833 10307.025 - 10366.604: 75.3750% ( 243) 00:10:21.833 10366.604 - 10426.182: 77.3047% ( 247) 00:10:21.833 10426.182 - 10485.760: 79.1641% ( 238) 00:10:21.833 10485.760 - 10545.338: 81.0469% ( 241) 00:10:21.833 10545.338 - 10604.916: 82.9141% ( 239) 00:10:21.833 10604.916 - 10664.495: 84.6406% ( 221) 00:10:21.833 10664.495 - 10724.073: 86.3125% ( 214) 00:10:21.833 10724.073 - 10783.651: 87.8594% ( 198) 00:10:21.833 10783.651 - 10843.229: 
89.1953% ( 171) 00:10:21.833 10843.229 - 10902.807: 90.3672% ( 150) 00:10:21.833 10902.807 - 10962.385: 91.3047% ( 120) 00:10:21.833 10962.385 - 11021.964: 92.0000% ( 89) 00:10:21.833 11021.964 - 11081.542: 92.5781% ( 74) 00:10:21.833 11081.542 - 11141.120: 93.0859% ( 65) 00:10:21.833 11141.120 - 11200.698: 93.4453% ( 46) 00:10:21.833 11200.698 - 11260.276: 93.7344% ( 37) 00:10:21.833 11260.276 - 11319.855: 93.9453% ( 27) 00:10:21.833 11319.855 - 11379.433: 94.1250% ( 23) 00:10:21.833 11379.433 - 11439.011: 94.2891% ( 21) 00:10:21.833 11439.011 - 11498.589: 94.4375% ( 19) 00:10:21.833 11498.589 - 11558.167: 94.5938% ( 20) 00:10:21.833 11558.167 - 11617.745: 94.7578% ( 21) 00:10:21.833 11617.745 - 11677.324: 94.8906% ( 17) 00:10:21.833 11677.324 - 11736.902: 95.0391% ( 19) 00:10:21.833 11736.902 - 11796.480: 95.1797% ( 18) 00:10:21.833 11796.480 - 11856.058: 95.3125% ( 17) 00:10:21.833 11856.058 - 11915.636: 95.4688% ( 20) 00:10:21.833 11915.636 - 11975.215: 95.6172% ( 19) 00:10:21.833 11975.215 - 12034.793: 95.7578% ( 18) 00:10:21.833 12034.793 - 12094.371: 95.9219% ( 21) 00:10:21.833 12094.371 - 12153.949: 96.0625% ( 18) 00:10:21.833 12153.949 - 12213.527: 96.2031% ( 18) 00:10:21.833 12213.527 - 12273.105: 96.3359% ( 17) 00:10:21.833 12273.105 - 12332.684: 96.5000% ( 21) 00:10:21.833 12332.684 - 12392.262: 96.6328% ( 17) 00:10:21.833 12392.262 - 12451.840: 96.7969% ( 21) 00:10:21.833 12451.840 - 12511.418: 96.9453% ( 19) 00:10:21.833 12511.418 - 12570.996: 97.1094% ( 21) 00:10:21.833 12570.996 - 12630.575: 97.2266% ( 15) 00:10:21.833 12630.575 - 12690.153: 97.3594% ( 17) 00:10:21.833 12690.153 - 12749.731: 97.4844% ( 16) 00:10:21.833 12749.731 - 12809.309: 97.5938% ( 14) 00:10:21.833 12809.309 - 12868.887: 97.7031% ( 14) 00:10:21.833 12868.887 - 12928.465: 97.8047% ( 13) 00:10:21.833 12928.465 - 12988.044: 97.9219% ( 15) 00:10:21.833 12988.044 - 13047.622: 98.0234% ( 13) 00:10:21.833 13047.622 - 13107.200: 98.1094% ( 11) 00:10:21.833 13107.200 - 13166.778: 98.2031% ( 12) 00:10:21.833 13166.778 - 13226.356: 98.2812% ( 10) 00:10:21.833 13226.356 - 13285.935: 98.3516% ( 9) 00:10:21.833 13285.935 - 13345.513: 98.4141% ( 8) 00:10:21.833 13345.513 - 13405.091: 98.4609% ( 6) 00:10:21.833 13405.091 - 13464.669: 98.5234% ( 8) 00:10:21.833 13464.669 - 13524.247: 98.5703% ( 6) 00:10:21.833 13524.247 - 13583.825: 98.6172% ( 6) 00:10:21.833 13583.825 - 13643.404: 98.6562% ( 5) 00:10:21.833 13643.404 - 13702.982: 98.6875% ( 4) 00:10:21.833 13702.982 - 13762.560: 98.7188% ( 4) 00:10:21.833 13762.560 - 13822.138: 98.7578% ( 5) 00:10:21.833 13822.138 - 13881.716: 98.7891% ( 4) 00:10:21.833 13881.716 - 13941.295: 98.8281% ( 5) 00:10:21.833 13941.295 - 14000.873: 98.8594% ( 4) 00:10:21.833 14000.873 - 14060.451: 98.8906% ( 4) 00:10:21.833 14060.451 - 14120.029: 98.9297% ( 5) 00:10:21.833 14120.029 - 14179.607: 98.9609% ( 4) 00:10:21.833 14179.607 - 14239.185: 98.9922% ( 4) 00:10:21.833 14239.185 - 14298.764: 99.0000% ( 1) 00:10:21.833 34317.033 - 34555.345: 99.0547% ( 7) 00:10:21.833 34555.345 - 34793.658: 99.0938% ( 5) 00:10:21.833 34793.658 - 35031.971: 99.1484% ( 7) 00:10:21.833 35031.971 - 35270.284: 99.1953% ( 6) 00:10:21.833 35270.284 - 35508.596: 99.2500% ( 7) 00:10:21.833 35508.596 - 35746.909: 99.3047% ( 7) 00:10:21.833 35746.909 - 35985.222: 99.3594% ( 7) 00:10:21.834 35985.222 - 36223.535: 99.4062% ( 6) 00:10:21.834 36223.535 - 36461.847: 99.4609% ( 7) 00:10:21.834 36461.847 - 36700.160: 99.5078% ( 6) 00:10:21.834 36700.160 - 36938.473: 99.5625% ( 7) 00:10:21.834 36938.473 - 37176.785: 99.6172% ( 
7) 00:10:21.834 37176.785 - 37415.098: 99.6719% ( 7) 00:10:21.834 37415.098 - 37653.411: 99.7188% ( 6) 00:10:21.834 37653.411 - 37891.724: 99.7734% ( 7) 00:10:21.834 37891.724 - 38130.036: 99.8281% ( 7) 00:10:21.834 38130.036 - 38368.349: 99.8750% ( 6) 00:10:21.834 38368.349 - 38606.662: 99.9297% ( 7) 00:10:21.834 38606.662 - 38844.975: 99.9844% ( 7) 00:10:21.834 38844.975 - 39083.287: 100.0000% ( 2) 00:10:21.834 00:10:21.834 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:21.834 ============================================================================== 00:10:21.834 Range in us Cumulative IO count 00:10:21.834 7536.640 - 7566.429: 0.0155% ( 2) 00:10:21.834 7566.429 - 7596.218: 0.0309% ( 2) 00:10:21.834 7596.218 - 7626.007: 0.0619% ( 4) 00:10:21.834 7626.007 - 7685.585: 0.2011% ( 18) 00:10:21.834 7685.585 - 7745.164: 0.4100% ( 27) 00:10:21.834 7745.164 - 7804.742: 0.7116% ( 39) 00:10:21.834 7804.742 - 7864.320: 1.0829% ( 48) 00:10:21.834 7864.320 - 7923.898: 1.5084% ( 55) 00:10:21.834 7923.898 - 7983.476: 2.0808% ( 74) 00:10:21.834 7983.476 - 8043.055: 2.7073% ( 81) 00:10:21.834 8043.055 - 8102.633: 3.4267% ( 93) 00:10:21.834 8102.633 - 8162.211: 4.1925% ( 99) 00:10:21.834 8162.211 - 8221.789: 5.0588% ( 112) 00:10:21.834 8221.789 - 8281.367: 5.8555% ( 103) 00:10:21.834 8281.367 - 8340.945: 6.6909% ( 108) 00:10:21.834 8340.945 - 8400.524: 7.5186% ( 107) 00:10:21.834 8400.524 - 8460.102: 8.3308% ( 105) 00:10:21.834 8460.102 - 8519.680: 9.2048% ( 113) 00:10:21.834 8519.680 - 8579.258: 10.1021% ( 116) 00:10:21.834 8579.258 - 8638.836: 11.0613% ( 124) 00:10:21.834 8638.836 - 8698.415: 12.1906% ( 146) 00:10:21.834 8698.415 - 8757.993: 13.4978% ( 169) 00:10:21.834 8757.993 - 8817.571: 14.9443% ( 187) 00:10:21.834 8817.571 - 8877.149: 16.6228% ( 217) 00:10:21.834 8877.149 - 8936.727: 18.5102% ( 244) 00:10:21.834 8936.727 - 8996.305: 20.6064% ( 271) 00:10:21.834 8996.305 - 9055.884: 22.8651% ( 292) 00:10:21.834 9055.884 - 9115.462: 25.2785% ( 312) 00:10:21.834 9115.462 - 9175.040: 27.7305% ( 317) 00:10:21.834 9175.040 - 9234.618: 30.3295% ( 336) 00:10:21.834 9234.618 - 9294.196: 32.8744% ( 329) 00:10:21.834 9294.196 - 9353.775: 35.4425% ( 332) 00:10:21.834 9353.775 - 9413.353: 38.0569% ( 338) 00:10:21.834 9413.353 - 9472.931: 40.6250% ( 332) 00:10:21.834 9472.931 - 9532.509: 43.2704% ( 342) 00:10:21.834 9532.509 - 9592.087: 45.8617% ( 335) 00:10:21.834 9592.087 - 9651.665: 48.4839% ( 339) 00:10:21.834 9651.665 - 9711.244: 51.1371% ( 343) 00:10:21.834 9711.244 - 9770.822: 53.7206% ( 334) 00:10:21.834 9770.822 - 9830.400: 56.3119% ( 335) 00:10:21.834 9830.400 - 9889.978: 58.7717% ( 318) 00:10:21.834 9889.978 - 9949.556: 61.0845% ( 299) 00:10:21.834 9949.556 - 10009.135: 63.3818% ( 297) 00:10:21.834 10009.135 - 10068.713: 65.5554% ( 281) 00:10:21.834 10068.713 - 10128.291: 67.7135% ( 279) 00:10:21.834 10128.291 - 10187.869: 69.8252% ( 273) 00:10:21.834 10187.869 - 10247.447: 71.8827% ( 266) 00:10:21.834 10247.447 - 10307.025: 73.9403% ( 266) 00:10:21.834 10307.025 - 10366.604: 75.8509% ( 247) 00:10:21.834 10366.604 - 10426.182: 77.7614% ( 247) 00:10:21.834 10426.182 - 10485.760: 79.6566% ( 245) 00:10:21.834 10485.760 - 10545.338: 81.4511% ( 232) 00:10:21.834 10545.338 - 10604.916: 83.2921% ( 238) 00:10:21.834 10604.916 - 10664.495: 85.0634% ( 229) 00:10:21.834 10664.495 - 10724.073: 86.7265% ( 215) 00:10:21.834 10724.073 - 10783.651: 88.2426% ( 196) 00:10:21.834 10783.651 - 10843.229: 89.6117% ( 177) 00:10:21.834 10843.229 - 10902.807: 90.7024% ( 141) 00:10:21.834 10902.807 - 
10962.385: 91.5610% ( 111) 00:10:21.834 10962.385 - 11021.964: 92.2107% ( 84) 00:10:21.834 11021.964 - 11081.542: 92.7290% ( 67) 00:10:21.834 11081.542 - 11141.120: 93.1157% ( 50) 00:10:21.834 11141.120 - 11200.698: 93.4406% ( 42) 00:10:21.834 11200.698 - 11260.276: 93.7732% ( 43) 00:10:21.834 11260.276 - 11319.855: 93.9743% ( 26) 00:10:21.834 11319.855 - 11379.433: 94.1445% ( 22) 00:10:21.834 11379.433 - 11439.011: 94.3301% ( 24) 00:10:21.834 11439.011 - 11498.589: 94.4694% ( 18) 00:10:21.834 11498.589 - 11558.167: 94.6086% ( 18) 00:10:21.834 11558.167 - 11617.745: 94.7169% ( 14) 00:10:21.834 11617.745 - 11677.324: 94.8407% ( 16) 00:10:21.834 11677.324 - 11736.902: 94.9954% ( 20) 00:10:21.834 11736.902 - 11796.480: 95.1655% ( 22) 00:10:21.834 11796.480 - 11856.058: 95.3357% ( 22) 00:10:21.834 11856.058 - 11915.636: 95.4595% ( 16) 00:10:21.834 11915.636 - 11975.215: 95.5678% ( 14) 00:10:21.834 11975.215 - 12034.793: 95.6683% ( 13) 00:10:21.834 12034.793 - 12094.371: 95.7689% ( 13) 00:10:21.834 12094.371 - 12153.949: 95.8694% ( 13) 00:10:21.834 12153.949 - 12213.527: 95.9623% ( 12) 00:10:21.834 12213.527 - 12273.105: 96.0473% ( 11) 00:10:21.834 12273.105 - 12332.684: 96.1479% ( 13) 00:10:21.834 12332.684 - 12392.262: 96.2485% ( 13) 00:10:21.834 12392.262 - 12451.840: 96.3490% ( 13) 00:10:21.834 12451.840 - 12511.418: 96.4496% ( 13) 00:10:21.834 12511.418 - 12570.996: 96.5501% ( 13) 00:10:21.834 12570.996 - 12630.575: 96.6507% ( 13) 00:10:21.834 12630.575 - 12690.153: 96.7512% ( 13) 00:10:21.834 12690.153 - 12749.731: 96.8518% ( 13) 00:10:21.834 12749.731 - 12809.309: 96.9446% ( 12) 00:10:21.834 12809.309 - 12868.887: 97.0529% ( 14) 00:10:21.834 12868.887 - 12928.465: 97.1535% ( 13) 00:10:21.834 12928.465 - 12988.044: 97.2386% ( 11) 00:10:21.834 12988.044 - 13047.622: 97.3468% ( 14) 00:10:21.834 13047.622 - 13107.200: 97.3933% ( 6) 00:10:21.834 13107.200 - 13166.778: 97.4706% ( 10) 00:10:21.834 13166.778 - 13226.356: 97.5557% ( 11) 00:10:21.834 13226.356 - 13285.935: 97.6408% ( 11) 00:10:21.834 13285.935 - 13345.513: 97.7181% ( 10) 00:10:21.834 13345.513 - 13405.091: 97.8032% ( 11) 00:10:21.834 13405.091 - 13464.669: 97.9038% ( 13) 00:10:21.834 13464.669 - 13524.247: 97.9966% ( 12) 00:10:21.834 13524.247 - 13583.825: 98.0739% ( 10) 00:10:21.834 13583.825 - 13643.404: 98.1281% ( 7) 00:10:21.834 13643.404 - 13702.982: 98.1900% ( 8) 00:10:21.834 13702.982 - 13762.560: 98.2441% ( 7) 00:10:21.834 13762.560 - 13822.138: 98.3060% ( 8) 00:10:21.834 13822.138 - 13881.716: 98.3679% ( 8) 00:10:21.834 13881.716 - 13941.295: 98.3988% ( 4) 00:10:21.834 13941.295 - 14000.873: 98.4298% ( 4) 00:10:21.834 14000.873 - 14060.451: 98.4684% ( 5) 00:10:21.834 14060.451 - 14120.029: 98.4994% ( 4) 00:10:21.834 14120.029 - 14179.607: 98.5303% ( 4) 00:10:21.834 14179.607 - 14239.185: 98.5690% ( 5) 00:10:21.834 14239.185 - 14298.764: 98.5999% ( 4) 00:10:21.834 14298.764 - 14358.342: 98.6309% ( 4) 00:10:21.834 14358.342 - 14417.920: 98.6618% ( 4) 00:10:21.834 14417.920 - 14477.498: 98.7005% ( 5) 00:10:21.834 14477.498 - 14537.076: 98.7314% ( 4) 00:10:21.834 14537.076 - 14596.655: 98.7701% ( 5) 00:10:21.834 14596.655 - 14656.233: 98.8011% ( 4) 00:10:21.834 14656.233 - 14715.811: 98.8320% ( 4) 00:10:21.834 14715.811 - 14775.389: 98.8707% ( 5) 00:10:21.834 14775.389 - 14834.967: 98.9016% ( 4) 00:10:21.834 14834.967 - 14894.545: 98.9325% ( 4) 00:10:21.834 14894.545 - 14954.124: 98.9635% ( 4) 00:10:21.834 14954.124 - 15013.702: 98.9944% ( 4) 00:10:21.834 15013.702 - 15073.280: 99.0099% ( 2) 00:10:21.834 21567.302 - 
21686.458: 99.0254% ( 2) 00:10:21.834 21686.458 - 21805.615: 99.0486% ( 3) 00:10:21.834 21805.615 - 21924.771: 99.0718% ( 3) 00:10:21.834 21924.771 - 22043.927: 99.0950% ( 3) 00:10:21.834 22043.927 - 22163.084: 99.1105% ( 2) 00:10:21.834 22163.084 - 22282.240: 99.1337% ( 3) 00:10:21.834 22282.240 - 22401.396: 99.1569% ( 3) 00:10:21.834 22401.396 - 22520.553: 99.1801% ( 3) 00:10:21.834 22520.553 - 22639.709: 99.2033% ( 3) 00:10:21.834 22639.709 - 22758.865: 99.2342% ( 4) 00:10:21.834 22758.865 - 22878.022: 99.2574% ( 3) 00:10:21.834 22878.022 - 22997.178: 99.2806% ( 3) 00:10:21.834 22997.178 - 23116.335: 99.3038% ( 3) 00:10:21.834 23116.335 - 23235.491: 99.3270% ( 3) 00:10:21.834 23235.491 - 23354.647: 99.3502% ( 3) 00:10:21.834 23354.647 - 23473.804: 99.3735% ( 3) 00:10:21.834 23473.804 - 23592.960: 99.3967% ( 3) 00:10:21.834 23592.960 - 23712.116: 99.4199% ( 3) 00:10:21.834 23712.116 - 23831.273: 99.4431% ( 3) 00:10:21.834 23831.273 - 23950.429: 99.4663% ( 3) 00:10:21.834 23950.429 - 24069.585: 99.4972% ( 4) 00:10:21.835 24069.585 - 24188.742: 99.5204% ( 3) 00:10:21.835 24188.742 - 24307.898: 99.5436% ( 3) 00:10:21.835 24307.898 - 24427.055: 99.5668% ( 3) 00:10:21.835 24427.055 - 24546.211: 99.5823% ( 2) 00:10:21.835 24546.211 - 24665.367: 99.6055% ( 3) 00:10:21.835 24665.367 - 24784.524: 99.6287% ( 3) 00:10:21.835 24784.524 - 24903.680: 99.6519% ( 3) 00:10:21.835 24903.680 - 25022.836: 99.6751% ( 3) 00:10:21.835 25022.836 - 25141.993: 99.6983% ( 3) 00:10:21.835 25141.993 - 25261.149: 99.7215% ( 3) 00:10:21.835 25261.149 - 25380.305: 99.7447% ( 3) 00:10:21.835 25380.305 - 25499.462: 99.7679% ( 3) 00:10:21.835 25499.462 - 25618.618: 99.7912% ( 3) 00:10:21.835 25618.618 - 25737.775: 99.8144% ( 3) 00:10:21.835 25737.775 - 25856.931: 99.8376% ( 3) 00:10:21.835 25856.931 - 25976.087: 99.8608% ( 3) 00:10:21.835 25976.087 - 26095.244: 99.8762% ( 2) 00:10:21.835 26095.244 - 26214.400: 99.8994% ( 3) 00:10:21.835 26214.400 - 26333.556: 99.9226% ( 3) 00:10:21.835 26333.556 - 26452.713: 99.9459% ( 3) 00:10:21.835 26452.713 - 26571.869: 99.9691% ( 3) 00:10:21.835 26571.869 - 26691.025: 99.9923% ( 3) 00:10:21.835 26691.025 - 26810.182: 100.0000% ( 1) 00:10:21.835 00:10:21.835 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:21.835 ============================================================================== 00:10:21.835 Range in us Cumulative IO count 00:10:21.835 7566.429 - 7596.218: 0.0155% ( 2) 00:10:21.835 7596.218 - 7626.007: 0.0387% ( 3) 00:10:21.835 7626.007 - 7685.585: 0.1624% ( 16) 00:10:21.835 7685.585 - 7745.164: 0.3713% ( 27) 00:10:21.835 7745.164 - 7804.742: 0.6033% ( 30) 00:10:21.835 7804.742 - 7864.320: 0.9282% ( 42) 00:10:21.835 7864.320 - 7923.898: 1.3846% ( 59) 00:10:21.835 7923.898 - 7983.476: 1.9570% ( 74) 00:10:21.835 7983.476 - 8043.055: 2.6377% ( 88) 00:10:21.835 8043.055 - 8102.633: 3.4189% ( 101) 00:10:21.835 8102.633 - 8162.211: 4.1538% ( 95) 00:10:21.835 8162.211 - 8221.789: 4.9505% ( 103) 00:10:21.835 8221.789 - 8281.367: 5.7627% ( 105) 00:10:21.835 8281.367 - 8340.945: 6.5517% ( 102) 00:10:21.835 8340.945 - 8400.524: 7.3484% ( 103) 00:10:21.835 8400.524 - 8460.102: 8.1761% ( 107) 00:10:21.835 8460.102 - 8519.680: 9.0424% ( 112) 00:10:21.835 8519.680 - 8579.258: 9.9474% ( 117) 00:10:21.835 8579.258 - 8638.836: 10.9452% ( 129) 00:10:21.835 8638.836 - 8698.415: 11.9895% ( 135) 00:10:21.835 8698.415 - 8757.993: 13.2658% ( 165) 00:10:21.835 8757.993 - 8817.571: 14.8592% ( 206) 00:10:21.835 8817.571 - 8877.149: 16.6228% ( 228) 00:10:21.835 8877.149 - 8936.727: 
18.6417% ( 261) 00:10:21.835 8936.727 - 8996.305: 20.7147% ( 268) 00:10:21.835 8996.305 - 9055.884: 23.0043% ( 296) 00:10:21.835 9055.884 - 9115.462: 25.3713% ( 306) 00:10:21.835 9115.462 - 9175.040: 27.8311% ( 318) 00:10:21.835 9175.040 - 9234.618: 30.3914% ( 331) 00:10:21.835 9234.618 - 9294.196: 32.9981% ( 337) 00:10:21.835 9294.196 - 9353.775: 35.5507% ( 330) 00:10:21.835 9353.775 - 9413.353: 38.1730% ( 339) 00:10:21.835 9413.353 - 9472.931: 40.8338% ( 344) 00:10:21.835 9472.931 - 9532.509: 43.5179% ( 347) 00:10:21.835 9532.509 - 9592.087: 46.1402% ( 339) 00:10:21.835 9592.087 - 9651.665: 48.8088% ( 345) 00:10:21.835 9651.665 - 9711.244: 51.4774% ( 345) 00:10:21.835 9711.244 - 9770.822: 54.0919% ( 338) 00:10:21.835 9770.822 - 9830.400: 56.6445% ( 330) 00:10:21.835 9830.400 - 9889.978: 59.2126% ( 332) 00:10:21.835 9889.978 - 9949.556: 61.6646% ( 317) 00:10:21.835 9949.556 - 10009.135: 64.0238% ( 305) 00:10:21.835 10009.135 - 10068.713: 66.3212% ( 297) 00:10:21.835 10068.713 - 10128.291: 68.5334% ( 286) 00:10:21.835 10128.291 - 10187.869: 70.6451% ( 273) 00:10:21.835 10187.869 - 10247.447: 72.6485% ( 259) 00:10:21.835 10247.447 - 10307.025: 74.5978% ( 252) 00:10:21.835 10307.025 - 10366.604: 76.5006% ( 246) 00:10:21.835 10366.604 - 10426.182: 78.3880% ( 244) 00:10:21.835 10426.182 - 10485.760: 80.2290% ( 238) 00:10:21.835 10485.760 - 10545.338: 82.0777% ( 239) 00:10:21.835 10545.338 - 10604.916: 83.8877% ( 234) 00:10:21.835 10604.916 - 10664.495: 85.7287% ( 238) 00:10:21.835 10664.495 - 10724.073: 87.3917% ( 215) 00:10:21.835 10724.073 - 10783.651: 88.9619% ( 203) 00:10:21.835 10783.651 - 10843.229: 90.3001% ( 173) 00:10:21.835 10843.229 - 10902.807: 91.3598% ( 137) 00:10:21.835 10902.807 - 10962.385: 92.1952% ( 108) 00:10:21.835 10962.385 - 11021.964: 92.7986% ( 78) 00:10:21.835 11021.964 - 11081.542: 93.2782% ( 62) 00:10:21.835 11081.542 - 11141.120: 93.6881% ( 53) 00:10:21.835 11141.120 - 11200.698: 94.0207% ( 43) 00:10:21.835 11200.698 - 11260.276: 94.2992% ( 36) 00:10:21.835 11260.276 - 11319.855: 94.5003% ( 26) 00:10:21.835 11319.855 - 11379.433: 94.6937% ( 25) 00:10:21.835 11379.433 - 11439.011: 94.8407% ( 19) 00:10:21.835 11439.011 - 11498.589: 95.0031% ( 21) 00:10:21.835 11498.589 - 11558.167: 95.1423% ( 18) 00:10:21.835 11558.167 - 11617.745: 95.2893% ( 19) 00:10:21.835 11617.745 - 11677.324: 95.3821% ( 12) 00:10:21.835 11677.324 - 11736.902: 95.4749% ( 12) 00:10:21.835 11736.902 - 11796.480: 95.5832% ( 14) 00:10:21.835 11796.480 - 11856.058: 95.6683% ( 11) 00:10:21.835 11856.058 - 11915.636: 95.7611% ( 12) 00:10:21.835 11915.636 - 11975.215: 95.8540% ( 12) 00:10:21.835 11975.215 - 12034.793: 95.9081% ( 7) 00:10:21.835 12034.793 - 12094.371: 95.9777% ( 9) 00:10:21.835 12094.371 - 12153.949: 96.0473% ( 9) 00:10:21.835 12153.949 - 12213.527: 96.1247% ( 10) 00:10:21.835 12213.527 - 12273.105: 96.1866% ( 8) 00:10:21.835 12273.105 - 12332.684: 96.2717% ( 11) 00:10:21.835 12332.684 - 12392.262: 96.3490% ( 10) 00:10:21.835 12392.262 - 12451.840: 96.4109% ( 8) 00:10:21.835 12451.840 - 12511.418: 96.4882% ( 10) 00:10:21.835 12511.418 - 12570.996: 96.5501% ( 8) 00:10:21.835 12570.996 - 12630.575: 96.5965% ( 6) 00:10:21.835 12630.575 - 12690.153: 96.6429% ( 6) 00:10:21.835 12690.153 - 12749.731: 96.6739% ( 4) 00:10:21.835 12749.731 - 12809.309: 96.6971% ( 3) 00:10:21.835 12809.309 - 12868.887: 96.7358% ( 5) 00:10:21.835 12868.887 - 12928.465: 96.7667% ( 4) 00:10:21.835 12928.465 - 12988.044: 96.8209% ( 7) 00:10:21.835 12988.044 - 13047.622: 96.8673% ( 6) 00:10:21.835 13047.622 - 
13107.200: 96.9137% ( 6) 00:10:21.835 13107.200 - 13166.778: 96.9601% ( 6) 00:10:21.835 13166.778 - 13226.356: 97.0065% ( 6) 00:10:21.835 13226.356 - 13285.935: 97.0529% ( 6) 00:10:21.835 13285.935 - 13345.513: 97.1225% ( 9) 00:10:21.835 13345.513 - 13405.091: 97.1999% ( 10) 00:10:21.835 13405.091 - 13464.669: 97.2772% ( 10) 00:10:21.835 13464.669 - 13524.247: 97.3546% ( 10) 00:10:21.835 13524.247 - 13583.825: 97.4319% ( 10) 00:10:21.835 13583.825 - 13643.404: 97.5093% ( 10) 00:10:21.835 13643.404 - 13702.982: 97.5866% ( 10) 00:10:21.835 13702.982 - 13762.560: 97.6640% ( 10) 00:10:21.835 13762.560 - 13822.138: 97.7336% ( 9) 00:10:21.835 13822.138 - 13881.716: 97.8032% ( 9) 00:10:21.835 13881.716 - 13941.295: 97.8496% ( 6) 00:10:21.835 13941.295 - 14000.873: 97.9038% ( 7) 00:10:21.835 14000.873 - 14060.451: 97.9657% ( 8) 00:10:21.835 14060.451 - 14120.029: 98.0198% ( 7) 00:10:21.835 14120.029 - 14179.607: 98.0817% ( 8) 00:10:21.835 14179.607 - 14239.185: 98.1436% ( 8) 00:10:21.835 14239.185 - 14298.764: 98.1977% ( 7) 00:10:21.836 14298.764 - 14358.342: 98.2596% ( 8) 00:10:21.836 14358.342 - 14417.920: 98.3215% ( 8) 00:10:21.836 14417.920 - 14477.498: 98.3756% ( 7) 00:10:21.836 14477.498 - 14537.076: 98.4375% ( 8) 00:10:21.836 14537.076 - 14596.655: 98.4839% ( 6) 00:10:21.836 14596.655 - 14656.233: 98.5458% ( 8) 00:10:21.836 14656.233 - 14715.811: 98.5999% ( 7) 00:10:21.836 14715.811 - 14775.389: 98.6541% ( 7) 00:10:21.836 14775.389 - 14834.967: 98.7160% ( 8) 00:10:21.836 14834.967 - 14894.545: 98.7701% ( 7) 00:10:21.836 14894.545 - 14954.124: 98.8243% ( 7) 00:10:21.836 14954.124 - 15013.702: 98.8784% ( 7) 00:10:21.836 15013.702 - 15073.280: 98.9171% ( 5) 00:10:21.836 15073.280 - 15132.858: 98.9480% ( 4) 00:10:21.836 15132.858 - 15192.436: 98.9790% ( 4) 00:10:21.836 15192.436 - 15252.015: 99.0022% ( 3) 00:10:21.836 15252.015 - 15371.171: 99.0099% ( 1) 00:10:21.836 19779.956 - 19899.113: 99.0331% ( 3) 00:10:21.836 19899.113 - 20018.269: 99.0563% ( 3) 00:10:21.836 20018.269 - 20137.425: 99.0873% ( 4) 00:10:21.836 20137.425 - 20256.582: 99.1105% ( 3) 00:10:21.836 20256.582 - 20375.738: 99.1337% ( 3) 00:10:21.836 20375.738 - 20494.895: 99.1569% ( 3) 00:10:21.836 20494.895 - 20614.051: 99.1801% ( 3) 00:10:21.836 20614.051 - 20733.207: 99.1955% ( 2) 00:10:21.836 20733.207 - 20852.364: 99.2188% ( 3) 00:10:21.836 20852.364 - 20971.520: 99.2420% ( 3) 00:10:21.836 20971.520 - 21090.676: 99.2652% ( 3) 00:10:21.836 21090.676 - 21209.833: 99.2884% ( 3) 00:10:21.836 21209.833 - 21328.989: 99.3116% ( 3) 00:10:21.836 21328.989 - 21448.145: 99.3270% ( 2) 00:10:21.836 21448.145 - 21567.302: 99.3580% ( 4) 00:10:21.836 21567.302 - 21686.458: 99.3812% ( 3) 00:10:21.836 21686.458 - 21805.615: 99.4044% ( 3) 00:10:21.836 21805.615 - 21924.771: 99.4276% ( 3) 00:10:21.836 21924.771 - 22043.927: 99.4508% ( 3) 00:10:21.836 22043.927 - 22163.084: 99.4740% ( 3) 00:10:21.836 22163.084 - 22282.240: 99.4972% ( 3) 00:10:21.836 22282.240 - 22401.396: 99.5127% ( 2) 00:10:21.836 22401.396 - 22520.553: 99.5436% ( 4) 00:10:21.836 22520.553 - 22639.709: 99.5668% ( 3) 00:10:21.836 22639.709 - 22758.865: 99.5823% ( 2) 00:10:21.836 22758.865 - 22878.022: 99.6132% ( 4) 00:10:21.836 22878.022 - 22997.178: 99.6364% ( 3) 00:10:21.836 22997.178 - 23116.335: 99.6597% ( 3) 00:10:21.836 23116.335 - 23235.491: 99.6829% ( 3) 00:10:21.836 23235.491 - 23354.647: 99.7061% ( 3) 00:10:21.836 23354.647 - 23473.804: 99.7293% ( 3) 00:10:21.836 23473.804 - 23592.960: 99.7525% ( 3) 00:10:21.836 23592.960 - 23712.116: 99.7757% ( 3) 00:10:21.836 23712.116 
- 23831.273: 99.7989% ( 3) 00:10:21.836 23831.273 - 23950.429: 99.8221% ( 3) 00:10:21.836 23950.429 - 24069.585: 99.8453% ( 3) 00:10:21.836 24069.585 - 24188.742: 99.8685% ( 3) 00:10:21.836 24188.742 - 24307.898: 99.8917% ( 3) 00:10:21.836 24307.898 - 24427.055: 99.9149% ( 3) 00:10:21.836 24427.055 - 24546.211: 99.9381% ( 3) 00:10:21.836 24546.211 - 24665.367: 99.9536% ( 2) 00:10:21.836 24665.367 - 24784.524: 99.9768% ( 3) 00:10:21.836 24784.524 - 24903.680: 100.0000% ( 3) 00:10:21.836 00:10:21.836 20:56:42 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:23.212 Initializing NVMe Controllers 00:10:23.212 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:23.212 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:23.212 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:23.212 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:23.212 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:23.212 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:23.212 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:23.212 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:23.212 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:23.212 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:23.212 Initialization complete. Launching workers. 00:10:23.212 ======================================================== 00:10:23.212 Latency(us) 00:10:23.212 Device Information : IOPS MiB/s Average min max 00:10:23.212 PCIE (0000:00:06.0) NSID 1 from core 0: 10825.35 126.86 11825.96 8980.76 29138.62 00:10:23.212 PCIE (0000:00:07.0) NSID 1 from core 0: 10825.35 126.86 11826.70 9077.60 28573.91 00:10:23.212 PCIE (0000:00:09.0) NSID 1 from core 0: 10825.35 126.86 11824.62 9190.25 29265.56 00:10:23.212 PCIE (0000:00:08.0) NSID 1 from core 0: 10825.35 126.86 11823.42 9111.86 29043.21 00:10:23.212 PCIE (0000:00:08.0) NSID 2 from core 0: 10825.35 126.86 11820.77 9424.80 28813.44 00:10:23.212 PCIE (0000:00:08.0) NSID 3 from core 0: 10825.35 126.86 11818.27 9263.28 28347.56 00:10:23.212 ======================================================== 00:10:23.212 Total : 64952.12 761.16 11823.29 8980.76 29265.56 00:10:23.212 00:10:23.212 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:23.212 ================================================================================= 00:10:23.212 1.00000% : 9532.509us 00:10:23.212 10.00000% : 10247.447us 00:10:23.212 25.00000% : 10902.807us 00:10:23.212 50.00000% : 11677.324us 00:10:23.212 75.00000% : 12451.840us 00:10:23.212 90.00000% : 13226.356us 00:10:23.212 95.00000% : 13643.404us 00:10:23.212 98.00000% : 14358.342us 00:10:23.212 99.00000% : 25737.775us 00:10:23.212 99.50000% : 27525.120us 00:10:23.212 99.90000% : 28835.840us 00:10:23.212 99.99000% : 29193.309us 00:10:23.212 99.99900% : 29193.309us 00:10:23.212 99.99990% : 29193.309us 00:10:23.212 99.99999% : 29193.309us 00:10:23.212 00:10:23.212 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:23.212 ================================================================================= 00:10:23.212 1.00000% : 9592.087us 00:10:23.212 10.00000% : 10366.604us 00:10:23.212 25.00000% : 10962.385us 00:10:23.212 50.00000% : 11677.324us 00:10:23.212 75.00000% : 12392.262us 00:10:23.212 90.00000% : 13107.200us 00:10:23.212 95.00000% : 13524.247us 00:10:23.212 98.00000% : 14239.185us 00:10:23.212 99.00000% : 25261.149us 00:10:23.212 99.50000% : 27048.495us 
00:10:23.212 99.90000% : 28359.215us 00:10:23.212 99.99000% : 28597.527us 00:10:23.212 99.99900% : 28597.527us 00:10:23.212 99.99990% : 28597.527us 00:10:23.212 99.99999% : 28597.527us 00:10:23.212 00:10:23.212 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:23.212 ================================================================================= 00:10:23.212 1.00000% : 9592.087us 00:10:23.212 10.00000% : 10366.604us 00:10:23.212 25.00000% : 10962.385us 00:10:23.212 50.00000% : 11677.324us 00:10:23.212 75.00000% : 12392.262us 00:10:23.212 90.00000% : 13107.200us 00:10:23.212 95.00000% : 13464.669us 00:10:23.212 98.00000% : 14000.873us 00:10:23.212 99.00000% : 26095.244us 00:10:23.212 99.50000% : 27763.433us 00:10:23.212 99.90000% : 29074.153us 00:10:23.212 99.99000% : 29312.465us 00:10:23.212 99.99900% : 29312.465us 00:10:23.212 99.99990% : 29312.465us 00:10:23.212 99.99999% : 29312.465us 00:10:23.212 00:10:23.212 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:23.212 ================================================================================= 00:10:23.212 1.00000% : 9651.665us 00:10:23.212 10.00000% : 10366.604us 00:10:23.212 25.00000% : 10962.385us 00:10:23.212 50.00000% : 11677.324us 00:10:23.212 75.00000% : 12392.262us 00:10:23.212 90.00000% : 13107.200us 00:10:23.212 95.00000% : 13464.669us 00:10:23.212 98.00000% : 13941.295us 00:10:23.212 99.00000% : 25737.775us 00:10:23.212 99.50000% : 27405.964us 00:10:23.212 99.90000% : 28835.840us 00:10:23.212 99.99000% : 29074.153us 00:10:23.212 99.99900% : 29074.153us 00:10:23.212 99.99990% : 29074.153us 00:10:23.212 99.99999% : 29074.153us 00:10:23.212 00:10:23.212 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:23.212 ================================================================================= 00:10:23.212 1.00000% : 9651.665us 00:10:23.212 10.00000% : 10366.604us 00:10:23.212 25.00000% : 10962.385us 00:10:23.212 50.00000% : 11677.324us 00:10:23.212 75.00000% : 12392.262us 00:10:23.212 90.00000% : 13107.200us 00:10:23.212 95.00000% : 13464.669us 00:10:23.212 98.00000% : 14000.873us 00:10:23.212 99.00000% : 25976.087us 00:10:23.212 99.50000% : 27286.807us 00:10:23.212 99.90000% : 28597.527us 00:10:23.212 99.99000% : 28835.840us 00:10:23.212 99.99900% : 28835.840us 00:10:23.212 99.99990% : 28835.840us 00:10:23.212 99.99999% : 28835.840us 00:10:23.212 00:10:23.212 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:23.213 ================================================================================= 00:10:23.213 1.00000% : 9711.244us 00:10:23.213 10.00000% : 10366.604us 00:10:23.213 25.00000% : 10962.385us 00:10:23.213 50.00000% : 11677.324us 00:10:23.213 75.00000% : 12392.262us 00:10:23.213 90.00000% : 13047.622us 00:10:23.213 95.00000% : 13464.669us 00:10:23.213 98.00000% : 14179.607us 00:10:23.213 99.00000% : 25380.305us 00:10:23.213 99.50000% : 26929.338us 00:10:23.213 99.90000% : 28120.902us 00:10:23.213 99.99000% : 28359.215us 00:10:23.213 99.99900% : 28359.215us 00:10:23.213 99.99990% : 28359.215us 00:10:23.213 99.99999% : 28359.215us 00:10:23.213 00:10:23.213 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:23.213 ============================================================================== 00:10:23.213 Range in us Cumulative IO count 00:10:23.213 8936.727 - 8996.305: 0.0092% ( 1) 00:10:23.213 8996.305 - 9055.884: 0.0368% ( 3) 00:10:23.213 9055.884 - 9115.462: 0.1379% ( 11) 00:10:23.213 9115.462 - 9175.040: 0.2665% ( 
14) 00:10:23.213 9175.040 - 9234.618: 0.4136% ( 16) 00:10:23.213 9234.618 - 9294.196: 0.5699% ( 17) 00:10:23.213 9294.196 - 9353.775: 0.6985% ( 14) 00:10:23.213 9353.775 - 9413.353: 0.8088% ( 12) 00:10:23.213 9413.353 - 9472.931: 0.9926% ( 20) 00:10:23.213 9472.931 - 9532.509: 1.2316% ( 26) 00:10:23.213 9532.509 - 9592.087: 1.6636% ( 47) 00:10:23.213 9592.087 - 9651.665: 2.1048% ( 48) 00:10:23.213 9651.665 - 9711.244: 2.6103% ( 55) 00:10:23.213 9711.244 - 9770.822: 3.2353% ( 68) 00:10:23.213 9770.822 - 9830.400: 4.1360% ( 98) 00:10:23.213 9830.400 - 9889.978: 4.9632% ( 90) 00:10:23.213 9889.978 - 9949.556: 5.9099% ( 103) 00:10:23.213 9949.556 - 10009.135: 6.7004% ( 86) 00:10:23.213 10009.135 - 10068.713: 7.4724% ( 84) 00:10:23.213 10068.713 - 10128.291: 8.6121% ( 124) 00:10:23.213 10128.291 - 10187.869: 9.7335% ( 122) 00:10:23.213 10187.869 - 10247.447: 10.9191% ( 129) 00:10:23.213 10247.447 - 10307.025: 12.2151% ( 141) 00:10:23.213 10307.025 - 10366.604: 13.2812% ( 116) 00:10:23.213 10366.604 - 10426.182: 14.4393% ( 126) 00:10:23.213 10426.182 - 10485.760: 15.5607% ( 122) 00:10:23.213 10485.760 - 10545.338: 16.8842% ( 144) 00:10:23.213 10545.338 - 10604.916: 18.2353% ( 147) 00:10:23.213 10604.916 - 10664.495: 19.8070% ( 171) 00:10:23.213 10664.495 - 10724.073: 21.4062% ( 174) 00:10:23.213 10724.073 - 10783.651: 22.9871% ( 172) 00:10:23.213 10783.651 - 10843.229: 24.7978% ( 197) 00:10:23.213 10843.229 - 10902.807: 26.7188% ( 209) 00:10:23.213 10902.807 - 10962.385: 28.6673% ( 212) 00:10:23.213 10962.385 - 11021.964: 30.6066% ( 211) 00:10:23.213 11021.964 - 11081.542: 32.3897% ( 194) 00:10:23.213 11081.542 - 11141.120: 34.2371% ( 201) 00:10:23.213 11141.120 - 11200.698: 36.2040% ( 214) 00:10:23.213 11200.698 - 11260.276: 38.0239% ( 198) 00:10:23.213 11260.276 - 11319.855: 39.9724% ( 212) 00:10:23.213 11319.855 - 11379.433: 41.9118% ( 211) 00:10:23.213 11379.433 - 11439.011: 43.8787% ( 214) 00:10:23.213 11439.011 - 11498.589: 45.7445% ( 203) 00:10:23.213 11498.589 - 11558.167: 47.7298% ( 216) 00:10:23.213 11558.167 - 11617.745: 49.5772% ( 201) 00:10:23.213 11617.745 - 11677.324: 51.6176% ( 222) 00:10:23.213 11677.324 - 11736.902: 53.5754% ( 213) 00:10:23.213 11736.902 - 11796.480: 55.4963% ( 209) 00:10:23.213 11796.480 - 11856.058: 57.5276% ( 221) 00:10:23.213 11856.058 - 11915.636: 59.3290% ( 196) 00:10:23.213 11915.636 - 11975.215: 61.2776% ( 212) 00:10:23.213 11975.215 - 12034.793: 63.1342% ( 202) 00:10:23.213 12034.793 - 12094.371: 65.0184% ( 205) 00:10:23.213 12094.371 - 12153.949: 66.9026% ( 205) 00:10:23.213 12153.949 - 12213.527: 68.7040% ( 196) 00:10:23.213 12213.527 - 12273.105: 70.4963% ( 195) 00:10:23.213 12273.105 - 12332.684: 72.1415% ( 179) 00:10:23.213 12332.684 - 12392.262: 73.8511% ( 186) 00:10:23.213 12392.262 - 12451.840: 75.4136% ( 170) 00:10:23.213 12451.840 - 12511.418: 76.9210% ( 164) 00:10:23.213 12511.418 - 12570.996: 78.4191% ( 163) 00:10:23.213 12570.996 - 12630.575: 79.6967% ( 139) 00:10:23.213 12630.575 - 12690.153: 81.0018% ( 142) 00:10:23.213 12690.153 - 12749.731: 82.1875% ( 129) 00:10:23.213 12749.731 - 12809.309: 83.3732% ( 129) 00:10:23.213 12809.309 - 12868.887: 84.5496% ( 128) 00:10:23.213 12868.887 - 12928.465: 85.4963% ( 103) 00:10:23.213 12928.465 - 12988.044: 86.5349% ( 113) 00:10:23.213 12988.044 - 13047.622: 87.4357% ( 98) 00:10:23.213 13047.622 - 13107.200: 88.3456% ( 99) 00:10:23.213 13107.200 - 13166.778: 89.3015% ( 104) 00:10:23.213 13166.778 - 13226.356: 90.1838% ( 96) 00:10:23.213 13226.356 - 13285.935: 91.1029% ( 100) 00:10:23.213 13285.935 
- 13345.513: 91.9026% ( 87) 00:10:23.213 13345.513 - 13405.091: 92.7206% ( 89) 00:10:23.213 13405.091 - 13464.669: 93.4099% ( 75) 00:10:23.213 13464.669 - 13524.247: 94.0901% ( 74) 00:10:23.213 13524.247 - 13583.825: 94.7059% ( 67) 00:10:23.213 13583.825 - 13643.404: 95.2114% ( 55) 00:10:23.213 13643.404 - 13702.982: 95.6801% ( 51) 00:10:23.213 13702.982 - 13762.560: 96.0294% ( 38) 00:10:23.213 13762.560 - 13822.138: 96.3235% ( 32) 00:10:23.213 13822.138 - 13881.716: 96.6360% ( 34) 00:10:23.213 13881.716 - 13941.295: 96.8842% ( 27) 00:10:23.213 13941.295 - 14000.873: 97.1324% ( 27) 00:10:23.213 14000.873 - 14060.451: 97.3438% ( 23) 00:10:23.213 14060.451 - 14120.029: 97.5000% ( 17) 00:10:23.213 14120.029 - 14179.607: 97.7022% ( 22) 00:10:23.213 14179.607 - 14239.185: 97.8309% ( 14) 00:10:23.213 14239.185 - 14298.764: 97.9320% ( 11) 00:10:23.213 14298.764 - 14358.342: 98.0790% ( 16) 00:10:23.213 14358.342 - 14417.920: 98.1801% ( 11) 00:10:23.213 14417.920 - 14477.498: 98.2629% ( 9) 00:10:23.213 14477.498 - 14537.076: 98.3915% ( 14) 00:10:23.213 14537.076 - 14596.655: 98.4743% ( 9) 00:10:23.213 14596.655 - 14656.233: 98.5570% ( 9) 00:10:23.213 14656.233 - 14715.811: 98.6029% ( 5) 00:10:23.213 14715.811 - 14775.389: 98.6673% ( 7) 00:10:23.213 14775.389 - 14834.967: 98.7224% ( 6) 00:10:23.213 14834.967 - 14894.545: 98.7408% ( 2) 00:10:23.213 14894.545 - 14954.124: 98.7684% ( 3) 00:10:23.213 14954.124 - 15013.702: 98.7960% ( 3) 00:10:23.213 15013.702 - 15073.280: 98.8143% ( 2) 00:10:23.213 15073.280 - 15132.858: 98.8235% ( 1) 00:10:23.213 25261.149 - 25380.305: 98.8511% ( 3) 00:10:23.213 25380.305 - 25499.462: 98.9154% ( 7) 00:10:23.213 25499.462 - 25618.618: 98.9890% ( 8) 00:10:23.213 25618.618 - 25737.775: 99.0441% ( 6) 00:10:23.213 25737.775 - 25856.931: 99.1176% ( 8) 00:10:23.213 25856.931 - 25976.087: 99.1360% ( 2) 00:10:23.213 26095.244 - 26214.400: 99.1452% ( 1) 00:10:23.213 26214.400 - 26333.556: 99.1728% ( 3) 00:10:23.213 26333.556 - 26452.713: 99.2096% ( 4) 00:10:23.213 26452.713 - 26571.869: 99.2463% ( 4) 00:10:23.213 26571.869 - 26691.025: 99.2831% ( 4) 00:10:23.213 26691.025 - 26810.182: 99.3199% ( 4) 00:10:23.213 26810.182 - 26929.338: 99.3566% ( 4) 00:10:23.213 26929.338 - 27048.495: 99.3934% ( 4) 00:10:23.213 27048.495 - 27167.651: 99.4301% ( 4) 00:10:23.213 27167.651 - 27286.807: 99.4669% ( 4) 00:10:23.213 27286.807 - 27405.964: 99.4853% ( 2) 00:10:23.213 27405.964 - 27525.120: 99.5221% ( 4) 00:10:23.213 27525.120 - 27644.276: 99.5496% ( 3) 00:10:23.213 27644.276 - 27763.433: 99.5956% ( 5) 00:10:23.213 27763.433 - 27882.589: 99.6232% ( 3) 00:10:23.213 27882.589 - 28001.745: 99.6599% ( 4) 00:10:23.213 28001.745 - 28120.902: 99.7059% ( 5) 00:10:23.213 28120.902 - 28240.058: 99.7243% ( 2) 00:10:23.213 28240.058 - 28359.215: 99.7702% ( 5) 00:10:23.213 28359.215 - 28478.371: 99.7978% ( 3) 00:10:23.213 28478.371 - 28597.527: 99.8346% ( 4) 00:10:23.213 28597.527 - 28716.684: 99.8713% ( 4) 00:10:23.213 28716.684 - 28835.840: 99.9081% ( 4) 00:10:23.213 28835.840 - 28954.996: 99.9357% ( 3) 00:10:23.213 28954.996 - 29074.153: 99.9816% ( 5) 00:10:23.213 29074.153 - 29193.309: 100.0000% ( 2) 00:10:23.213 00:10:23.213 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:23.213 ============================================================================== 00:10:23.213 Range in us Cumulative IO count 00:10:23.213 9055.884 - 9115.462: 0.0092% ( 1) 00:10:23.213 9234.618 - 9294.196: 0.0460% ( 4) 00:10:23.213 9294.196 - 9353.775: 0.1471% ( 11) 00:10:23.213 9353.775 - 9413.353: 
0.3125% ( 18) 00:10:23.213 9413.353 - 9472.931: 0.5882% ( 30) 00:10:23.213 9472.931 - 9532.509: 0.8640% ( 30) 00:10:23.213 9532.509 - 9592.087: 1.0478% ( 20) 00:10:23.213 9592.087 - 9651.665: 1.3235% ( 30) 00:10:23.213 9651.665 - 9711.244: 1.5809% ( 28) 00:10:23.213 9711.244 - 9770.822: 1.8842% ( 33) 00:10:23.213 9770.822 - 9830.400: 2.2426% ( 39) 00:10:23.213 9830.400 - 9889.978: 2.9228% ( 74) 00:10:23.213 9889.978 - 9949.556: 3.6029% ( 74) 00:10:23.213 9949.556 - 10009.135: 4.4118% ( 88) 00:10:23.213 10009.135 - 10068.713: 5.4136% ( 109) 00:10:23.213 10068.713 - 10128.291: 6.4062% ( 108) 00:10:23.213 10128.291 - 10187.869: 7.4265% ( 111) 00:10:23.213 10187.869 - 10247.447: 8.4007% ( 106) 00:10:23.213 10247.447 - 10307.025: 9.5956% ( 130) 00:10:23.213 10307.025 - 10366.604: 10.9191% ( 144) 00:10:23.213 10366.604 - 10426.182: 12.2518% ( 145) 00:10:23.213 10426.182 - 10485.760: 13.6121% ( 148) 00:10:23.213 10485.760 - 10545.338: 15.0092% ( 152) 00:10:23.213 10545.338 - 10604.916: 16.4246% ( 154) 00:10:23.213 10604.916 - 10664.495: 17.8309% ( 153) 00:10:23.213 10664.495 - 10724.073: 19.2647% ( 156) 00:10:23.213 10724.073 - 10783.651: 20.8272% ( 170) 00:10:23.213 10783.651 - 10843.229: 22.3713% ( 168) 00:10:23.213 10843.229 - 10902.807: 24.0441% ( 182) 00:10:23.213 10902.807 - 10962.385: 25.7721% ( 188) 00:10:23.214 10962.385 - 11021.964: 27.6103% ( 200) 00:10:23.214 11021.964 - 11081.542: 29.6599% ( 223) 00:10:23.214 11081.542 - 11141.120: 31.7371% ( 226) 00:10:23.214 11141.120 - 11200.698: 33.8419% ( 229) 00:10:23.214 11200.698 - 11260.276: 35.9926% ( 234) 00:10:23.214 11260.276 - 11319.855: 38.1985% ( 240) 00:10:23.214 11319.855 - 11379.433: 40.3952% ( 239) 00:10:23.214 11379.433 - 11439.011: 42.6011% ( 240) 00:10:23.214 11439.011 - 11498.589: 44.8805% ( 248) 00:10:23.214 11498.589 - 11558.167: 47.0588% ( 237) 00:10:23.214 11558.167 - 11617.745: 49.3658% ( 251) 00:10:23.214 11617.745 - 11677.324: 51.5717% ( 240) 00:10:23.214 11677.324 - 11736.902: 53.7960% ( 242) 00:10:23.214 11736.902 - 11796.480: 55.9651% ( 236) 00:10:23.214 11796.480 - 11856.058: 58.2077% ( 244) 00:10:23.214 11856.058 - 11915.636: 60.1930% ( 216) 00:10:23.214 11915.636 - 11975.215: 62.2426% ( 223) 00:10:23.214 11975.215 - 12034.793: 64.3015% ( 224) 00:10:23.214 12034.793 - 12094.371: 66.4062% ( 229) 00:10:23.214 12094.371 - 12153.949: 68.4835% ( 226) 00:10:23.214 12153.949 - 12213.527: 70.4320% ( 212) 00:10:23.214 12213.527 - 12273.105: 72.3621% ( 210) 00:10:23.214 12273.105 - 12332.684: 74.0717% ( 186) 00:10:23.214 12332.684 - 12392.262: 75.7721% ( 185) 00:10:23.214 12392.262 - 12451.840: 77.3346% ( 170) 00:10:23.214 12451.840 - 12511.418: 78.8695% ( 167) 00:10:23.214 12511.418 - 12570.996: 80.2757% ( 153) 00:10:23.214 12570.996 - 12630.575: 81.7188% ( 157) 00:10:23.214 12630.575 - 12690.153: 83.0882% ( 149) 00:10:23.214 12690.153 - 12749.731: 84.3290% ( 135) 00:10:23.214 12749.731 - 12809.309: 85.5331% ( 131) 00:10:23.214 12809.309 - 12868.887: 86.6268% ( 119) 00:10:23.214 12868.887 - 12928.465: 87.7574% ( 123) 00:10:23.214 12928.465 - 12988.044: 88.7776% ( 111) 00:10:23.214 12988.044 - 13047.622: 89.8346% ( 115) 00:10:23.214 13047.622 - 13107.200: 90.7904% ( 104) 00:10:23.214 13107.200 - 13166.778: 91.5993% ( 88) 00:10:23.214 13166.778 - 13226.356: 92.3621% ( 83) 00:10:23.214 13226.356 - 13285.935: 93.0699% ( 77) 00:10:23.214 13285.935 - 13345.513: 93.6857% ( 67) 00:10:23.214 13345.513 - 13405.091: 94.2555% ( 62) 00:10:23.214 13405.091 - 13464.669: 94.6967% ( 48) 00:10:23.214 13464.669 - 13524.247: 95.1011% ( 44) 
00:10:23.214 13524.247 - 13583.825: 95.5055% ( 44) 00:10:23.214 13583.825 - 13643.404: 95.9007% ( 43) 00:10:23.214 13643.404 - 13702.982: 96.2132% ( 34) 00:10:23.214 13702.982 - 13762.560: 96.5441% ( 36) 00:10:23.214 13762.560 - 13822.138: 96.8015% ( 28) 00:10:23.214 13822.138 - 13881.716: 97.0496% ( 27) 00:10:23.214 13881.716 - 13941.295: 97.2610% ( 23) 00:10:23.214 13941.295 - 14000.873: 97.4632% ( 22) 00:10:23.214 14000.873 - 14060.451: 97.6379% ( 19) 00:10:23.214 14060.451 - 14120.029: 97.8217% ( 20) 00:10:23.214 14120.029 - 14179.607: 97.9963% ( 19) 00:10:23.214 14179.607 - 14239.185: 98.1434% ( 16) 00:10:23.214 14239.185 - 14298.764: 98.2629% ( 13) 00:10:23.214 14298.764 - 14358.342: 98.3640% ( 11) 00:10:23.214 14358.342 - 14417.920: 98.4559% ( 10) 00:10:23.214 14417.920 - 14477.498: 98.5386% ( 9) 00:10:23.214 14477.498 - 14537.076: 98.5938% ( 6) 00:10:23.214 14537.076 - 14596.655: 98.6397% ( 5) 00:10:23.214 14596.655 - 14656.233: 98.6857% ( 5) 00:10:23.214 14656.233 - 14715.811: 98.7316% ( 5) 00:10:23.214 14715.811 - 14775.389: 98.7500% ( 2) 00:10:23.214 14775.389 - 14834.967: 98.7684% ( 2) 00:10:23.214 14834.967 - 14894.545: 98.7868% ( 2) 00:10:23.214 14894.545 - 14954.124: 98.8051% ( 2) 00:10:23.214 14954.124 - 15013.702: 98.8235% ( 2) 00:10:23.214 24546.211 - 24665.367: 98.8327% ( 1) 00:10:23.214 24665.367 - 24784.524: 98.8695% ( 4) 00:10:23.214 24784.524 - 24903.680: 98.9062% ( 4) 00:10:23.214 24903.680 - 25022.836: 98.9430% ( 4) 00:10:23.214 25022.836 - 25141.993: 98.9798% ( 4) 00:10:23.214 25141.993 - 25261.149: 99.0165% ( 4) 00:10:23.214 25261.149 - 25380.305: 99.0441% ( 3) 00:10:23.214 25380.305 - 25499.462: 99.0809% ( 4) 00:10:23.214 25499.462 - 25618.618: 99.1176% ( 4) 00:10:23.214 25618.618 - 25737.775: 99.1544% ( 4) 00:10:23.214 25737.775 - 25856.931: 99.1728% ( 2) 00:10:23.214 25856.931 - 25976.087: 99.2096% ( 4) 00:10:23.214 25976.087 - 26095.244: 99.2463% ( 4) 00:10:23.214 26095.244 - 26214.400: 99.2831% ( 4) 00:10:23.214 26214.400 - 26333.556: 99.3199% ( 4) 00:10:23.214 26333.556 - 26452.713: 99.3566% ( 4) 00:10:23.214 26452.713 - 26571.869: 99.3842% ( 3) 00:10:23.214 26571.869 - 26691.025: 99.4210% ( 4) 00:10:23.214 26691.025 - 26810.182: 99.4577% ( 4) 00:10:23.214 26810.182 - 26929.338: 99.4853% ( 3) 00:10:23.214 26929.338 - 27048.495: 99.5221% ( 4) 00:10:23.214 27048.495 - 27167.651: 99.5588% ( 4) 00:10:23.214 27167.651 - 27286.807: 99.5956% ( 4) 00:10:23.214 27286.807 - 27405.964: 99.6324% ( 4) 00:10:23.214 27405.964 - 27525.120: 99.6691% ( 4) 00:10:23.214 27525.120 - 27644.276: 99.7059% ( 4) 00:10:23.214 27644.276 - 27763.433: 99.7426% ( 4) 00:10:23.214 27763.433 - 27882.589: 99.7794% ( 4) 00:10:23.214 27882.589 - 28001.745: 99.8162% ( 4) 00:10:23.214 28001.745 - 28120.902: 99.8529% ( 4) 00:10:23.214 28120.902 - 28240.058: 99.8897% ( 4) 00:10:23.214 28240.058 - 28359.215: 99.9265% ( 4) 00:10:23.214 28359.215 - 28478.371: 99.9632% ( 4) 00:10:23.214 28478.371 - 28597.527: 100.0000% ( 4) 00:10:23.214 00:10:23.214 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:23.214 ============================================================================== 00:10:23.214 Range in us Cumulative IO count 00:10:23.214 9175.040 - 9234.618: 0.0460% ( 5) 00:10:23.214 9234.618 - 9294.196: 0.1654% ( 13) 00:10:23.214 9294.196 - 9353.775: 0.2941% ( 14) 00:10:23.214 9353.775 - 9413.353: 0.4504% ( 17) 00:10:23.214 9413.353 - 9472.931: 0.6250% ( 19) 00:10:23.214 9472.931 - 9532.509: 0.9559% ( 36) 00:10:23.214 9532.509 - 9592.087: 1.2684% ( 34) 00:10:23.214 9592.087 - 
9651.665: 1.5349% ( 29) 00:10:23.214 9651.665 - 9711.244: 1.8199% ( 31) 00:10:23.214 9711.244 - 9770.822: 2.2426% ( 46) 00:10:23.214 9770.822 - 9830.400: 2.7757% ( 58) 00:10:23.214 9830.400 - 9889.978: 3.4375% ( 72) 00:10:23.214 9889.978 - 9949.556: 4.2371% ( 87) 00:10:23.214 9949.556 - 10009.135: 5.0000% ( 83) 00:10:23.214 10009.135 - 10068.713: 5.8364% ( 91) 00:10:23.214 10068.713 - 10128.291: 6.7096% ( 95) 00:10:23.214 10128.291 - 10187.869: 7.7114% ( 109) 00:10:23.214 10187.869 - 10247.447: 8.8235% ( 121) 00:10:23.214 10247.447 - 10307.025: 9.8989% ( 117) 00:10:23.214 10307.025 - 10366.604: 11.0294% ( 123) 00:10:23.214 10366.604 - 10426.182: 12.1875% ( 126) 00:10:23.214 10426.182 - 10485.760: 13.3364% ( 125) 00:10:23.214 10485.760 - 10545.338: 14.8254% ( 162) 00:10:23.214 10545.338 - 10604.916: 16.3327% ( 164) 00:10:23.214 10604.916 - 10664.495: 17.8676% ( 167) 00:10:23.214 10664.495 - 10724.073: 19.4853% ( 176) 00:10:23.214 10724.073 - 10783.651: 21.0110% ( 166) 00:10:23.214 10783.651 - 10843.229: 22.6011% ( 173) 00:10:23.214 10843.229 - 10902.807: 24.3842% ( 194) 00:10:23.214 10902.807 - 10962.385: 26.0110% ( 177) 00:10:23.214 10962.385 - 11021.964: 27.9504% ( 211) 00:10:23.214 11021.964 - 11081.542: 29.8346% ( 205) 00:10:23.214 11081.542 - 11141.120: 31.7647% ( 210) 00:10:23.214 11141.120 - 11200.698: 33.7224% ( 213) 00:10:23.214 11200.698 - 11260.276: 35.7996% ( 226) 00:10:23.214 11260.276 - 11319.855: 38.1526% ( 256) 00:10:23.214 11319.855 - 11379.433: 40.4688% ( 252) 00:10:23.214 11379.433 - 11439.011: 42.7390% ( 247) 00:10:23.214 11439.011 - 11498.589: 45.0460% ( 251) 00:10:23.214 11498.589 - 11558.167: 47.2978% ( 245) 00:10:23.214 11558.167 - 11617.745: 49.4945% ( 239) 00:10:23.214 11617.745 - 11677.324: 51.8199% ( 253) 00:10:23.214 11677.324 - 11736.902: 53.9982% ( 237) 00:10:23.214 11736.902 - 11796.480: 56.2224% ( 242) 00:10:23.214 11796.480 - 11856.058: 58.3272% ( 229) 00:10:23.214 11856.058 - 11915.636: 60.3125% ( 216) 00:10:23.214 11915.636 - 11975.215: 62.4265% ( 230) 00:10:23.214 11975.215 - 12034.793: 64.4118% ( 216) 00:10:23.214 12034.793 - 12094.371: 66.4798% ( 225) 00:10:23.214 12094.371 - 12153.949: 68.4926% ( 219) 00:10:23.214 12153.949 - 12213.527: 70.4688% ( 215) 00:10:23.214 12213.527 - 12273.105: 72.2335% ( 192) 00:10:23.214 12273.105 - 12332.684: 73.9522% ( 187) 00:10:23.214 12332.684 - 12392.262: 75.5974% ( 179) 00:10:23.214 12392.262 - 12451.840: 77.1324% ( 167) 00:10:23.214 12451.840 - 12511.418: 78.6489% ( 165) 00:10:23.214 12511.418 - 12570.996: 80.0735% ( 155) 00:10:23.214 12570.996 - 12630.575: 81.4614% ( 151) 00:10:23.214 12630.575 - 12690.153: 82.8309% ( 149) 00:10:23.214 12690.153 - 12749.731: 84.1360% ( 142) 00:10:23.214 12749.731 - 12809.309: 85.3309% ( 130) 00:10:23.214 12809.309 - 12868.887: 86.4706% ( 124) 00:10:23.214 12868.887 - 12928.465: 87.4908% ( 111) 00:10:23.214 12928.465 - 12988.044: 88.4835% ( 108) 00:10:23.214 12988.044 - 13047.622: 89.5129% ( 112) 00:10:23.214 13047.622 - 13107.200: 90.5147% ( 109) 00:10:23.214 13107.200 - 13166.778: 91.4338% ( 100) 00:10:23.214 13166.778 - 13226.356: 92.2978% ( 94) 00:10:23.214 13226.356 - 13285.935: 93.2353% ( 102) 00:10:23.214 13285.935 - 13345.513: 94.0257% ( 86) 00:10:23.214 13345.513 - 13405.091: 94.7151% ( 75) 00:10:23.214 13405.091 - 13464.669: 95.3768% ( 72) 00:10:23.214 13464.669 - 13524.247: 95.9283% ( 60) 00:10:23.214 13524.247 - 13583.825: 96.4890% ( 61) 00:10:23.214 13583.825 - 13643.404: 96.9301% ( 48) 00:10:23.214 13643.404 - 13702.982: 97.2426% ( 34) 00:10:23.214 13702.982 - 
13762.560: 97.4816% ( 26) 00:10:23.214 13762.560 - 13822.138: 97.6562% ( 19) 00:10:23.214 13822.138 - 13881.716: 97.8309% ( 19) 00:10:23.214 13881.716 - 13941.295: 97.9871% ( 17) 00:10:23.214 13941.295 - 14000.873: 98.1434% ( 17) 00:10:23.215 14000.873 - 14060.451: 98.2904% ( 16) 00:10:23.215 14060.451 - 14120.029: 98.4283% ( 15) 00:10:23.215 14120.029 - 14179.607: 98.5386% ( 12) 00:10:23.215 14179.607 - 14239.185: 98.6397% ( 11) 00:10:23.215 14239.185 - 14298.764: 98.6857% ( 5) 00:10:23.215 14298.764 - 14358.342: 98.7408% ( 6) 00:10:23.215 14358.342 - 14417.920: 98.7960% ( 6) 00:10:23.215 14417.920 - 14477.498: 98.8143% ( 2) 00:10:23.215 14477.498 - 14537.076: 98.8235% ( 1) 00:10:23.215 25380.305 - 25499.462: 98.8327% ( 1) 00:10:23.215 25499.462 - 25618.618: 98.8603% ( 3) 00:10:23.215 25618.618 - 25737.775: 98.8971% ( 4) 00:10:23.215 25737.775 - 25856.931: 98.9338% ( 4) 00:10:23.215 25856.931 - 25976.087: 98.9706% ( 4) 00:10:23.215 25976.087 - 26095.244: 99.0074% ( 4) 00:10:23.215 26095.244 - 26214.400: 99.0441% ( 4) 00:10:23.215 26214.400 - 26333.556: 99.0809% ( 4) 00:10:23.215 26333.556 - 26452.713: 99.1176% ( 4) 00:10:23.215 26452.713 - 26571.869: 99.1544% ( 4) 00:10:23.215 26571.869 - 26691.025: 99.1912% ( 4) 00:10:23.215 26691.025 - 26810.182: 99.2279% ( 4) 00:10:23.215 26810.182 - 26929.338: 99.2647% ( 4) 00:10:23.215 26929.338 - 27048.495: 99.3015% ( 4) 00:10:23.215 27048.495 - 27167.651: 99.3382% ( 4) 00:10:23.215 27167.651 - 27286.807: 99.3750% ( 4) 00:10:23.215 27286.807 - 27405.964: 99.4118% ( 4) 00:10:23.215 27405.964 - 27525.120: 99.4485% ( 4) 00:10:23.215 27525.120 - 27644.276: 99.4853% ( 4) 00:10:23.215 27644.276 - 27763.433: 99.5129% ( 3) 00:10:23.215 27763.433 - 27882.589: 99.5496% ( 4) 00:10:23.215 27882.589 - 28001.745: 99.5864% ( 4) 00:10:23.215 28001.745 - 28120.902: 99.6232% ( 4) 00:10:23.215 28120.902 - 28240.058: 99.6599% ( 4) 00:10:23.215 28240.058 - 28359.215: 99.7059% ( 5) 00:10:23.215 28359.215 - 28478.371: 99.7426% ( 4) 00:10:23.215 28478.371 - 28597.527: 99.7794% ( 4) 00:10:23.215 28597.527 - 28716.684: 99.8162% ( 4) 00:10:23.215 28716.684 - 28835.840: 99.8529% ( 4) 00:10:23.215 28835.840 - 28954.996: 99.8897% ( 4) 00:10:23.215 28954.996 - 29074.153: 99.9357% ( 5) 00:10:23.215 29074.153 - 29193.309: 99.9724% ( 4) 00:10:23.215 29193.309 - 29312.465: 100.0000% ( 3) 00:10:23.215 00:10:23.215 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:23.215 ============================================================================== 00:10:23.215 Range in us Cumulative IO count 00:10:23.215 9055.884 - 9115.462: 0.0184% ( 2) 00:10:23.215 9115.462 - 9175.040: 0.0643% ( 5) 00:10:23.215 9175.040 - 9234.618: 0.1379% ( 8) 00:10:23.215 9234.618 - 9294.196: 0.2022% ( 7) 00:10:23.215 9294.196 - 9353.775: 0.2482% ( 5) 00:10:23.215 9353.775 - 9413.353: 0.3493% ( 11) 00:10:23.215 9413.353 - 9472.931: 0.4871% ( 15) 00:10:23.215 9472.931 - 9532.509: 0.6801% ( 21) 00:10:23.215 9532.509 - 9592.087: 0.9926% ( 34) 00:10:23.215 9592.087 - 9651.665: 1.3419% ( 38) 00:10:23.215 9651.665 - 9711.244: 1.6544% ( 34) 00:10:23.215 9711.244 - 9770.822: 2.2243% ( 62) 00:10:23.215 9770.822 - 9830.400: 2.6287% ( 44) 00:10:23.215 9830.400 - 9889.978: 3.1526% ( 57) 00:10:23.215 9889.978 - 9949.556: 3.8971% ( 81) 00:10:23.215 9949.556 - 10009.135: 4.6140% ( 78) 00:10:23.215 10009.135 - 10068.713: 5.4504% ( 91) 00:10:23.215 10068.713 - 10128.291: 6.3235% ( 95) 00:10:23.215 10128.291 - 10187.869: 7.2886% ( 105) 00:10:23.215 10187.869 - 10247.447: 8.3640% ( 117) 00:10:23.215 10247.447 - 
10307.025: 9.6507% ( 140) 00:10:23.215 10307.025 - 10366.604: 10.9191% ( 138) 00:10:23.215 10366.604 - 10426.182: 12.1599% ( 135) 00:10:23.215 10426.182 - 10485.760: 13.5478% ( 151) 00:10:23.215 10485.760 - 10545.338: 14.8713% ( 144) 00:10:23.215 10545.338 - 10604.916: 16.2040% ( 145) 00:10:23.215 10604.916 - 10664.495: 17.6011% ( 152) 00:10:23.215 10664.495 - 10724.073: 19.1636% ( 170) 00:10:23.215 10724.073 - 10783.651: 20.7996% ( 178) 00:10:23.215 10783.651 - 10843.229: 22.3621% ( 170) 00:10:23.215 10843.229 - 10902.807: 24.0165% ( 180) 00:10:23.215 10902.807 - 10962.385: 25.7445% ( 188) 00:10:23.215 10962.385 - 11021.964: 27.5000% ( 191) 00:10:23.215 11021.964 - 11081.542: 29.3290% ( 199) 00:10:23.215 11081.542 - 11141.120: 31.1489% ( 198) 00:10:23.215 11141.120 - 11200.698: 33.1342% ( 216) 00:10:23.215 11200.698 - 11260.276: 35.2941% ( 235) 00:10:23.215 11260.276 - 11319.855: 37.4449% ( 234) 00:10:23.215 11319.855 - 11379.433: 39.6691% ( 242) 00:10:23.215 11379.433 - 11439.011: 42.0037% ( 254) 00:10:23.215 11439.011 - 11498.589: 44.4945% ( 271) 00:10:23.215 11498.589 - 11558.167: 46.9118% ( 263) 00:10:23.215 11558.167 - 11617.745: 49.3566% ( 266) 00:10:23.215 11617.745 - 11677.324: 51.7096% ( 256) 00:10:23.215 11677.324 - 11736.902: 53.9982% ( 249) 00:10:23.215 11736.902 - 11796.480: 56.2224% ( 242) 00:10:23.215 11796.480 - 11856.058: 58.3456% ( 231) 00:10:23.215 11856.058 - 11915.636: 60.4779% ( 232) 00:10:23.215 11915.636 - 11975.215: 62.5827% ( 229) 00:10:23.215 11975.215 - 12034.793: 64.6140% ( 221) 00:10:23.215 12034.793 - 12094.371: 66.5809% ( 214) 00:10:23.215 12094.371 - 12153.949: 68.5202% ( 211) 00:10:23.215 12153.949 - 12213.527: 70.3860% ( 203) 00:10:23.215 12213.527 - 12273.105: 72.2886% ( 207) 00:10:23.215 12273.105 - 12332.684: 74.0349% ( 190) 00:10:23.215 12332.684 - 12392.262: 75.8180% ( 194) 00:10:23.215 12392.262 - 12451.840: 77.4449% ( 177) 00:10:23.215 12451.840 - 12511.418: 79.0165% ( 171) 00:10:23.215 12511.418 - 12570.996: 80.4779% ( 159) 00:10:23.215 12570.996 - 12630.575: 81.9485% ( 160) 00:10:23.215 12630.575 - 12690.153: 83.2537% ( 142) 00:10:23.215 12690.153 - 12749.731: 84.5404% ( 140) 00:10:23.215 12749.731 - 12809.309: 85.6893% ( 125) 00:10:23.215 12809.309 - 12868.887: 86.8566% ( 127) 00:10:23.215 12868.887 - 12928.465: 87.9596% ( 120) 00:10:23.215 12928.465 - 12988.044: 89.0257% ( 116) 00:10:23.215 12988.044 - 13047.622: 89.9908% ( 105) 00:10:23.215 13047.622 - 13107.200: 90.9467% ( 104) 00:10:23.215 13107.200 - 13166.778: 91.8107% ( 94) 00:10:23.215 13166.778 - 13226.356: 92.6746% ( 94) 00:10:23.215 13226.356 - 13285.935: 93.4926% ( 89) 00:10:23.215 13285.935 - 13345.513: 94.1912% ( 76) 00:10:23.215 13345.513 - 13405.091: 94.8805% ( 75) 00:10:23.215 13405.091 - 13464.669: 95.5147% ( 69) 00:10:23.215 13464.669 - 13524.247: 96.1121% ( 65) 00:10:23.215 13524.247 - 13583.825: 96.5809% ( 51) 00:10:23.215 13583.825 - 13643.404: 96.9577% ( 41) 00:10:23.215 13643.404 - 13702.982: 97.2794% ( 35) 00:10:23.215 13702.982 - 13762.560: 97.5551% ( 30) 00:10:23.215 13762.560 - 13822.138: 97.7390% ( 20) 00:10:23.215 13822.138 - 13881.716: 97.9228% ( 20) 00:10:23.215 13881.716 - 13941.295: 98.0790% ( 17) 00:10:23.215 13941.295 - 14000.873: 98.1710% ( 10) 00:10:23.215 14000.873 - 14060.451: 98.2629% ( 10) 00:10:23.215 14060.451 - 14120.029: 98.3364% ( 8) 00:10:23.215 14120.029 - 14179.607: 98.4283% ( 10) 00:10:23.215 14179.607 - 14239.185: 98.5018% ( 8) 00:10:23.215 14239.185 - 14298.764: 98.5662% ( 7) 00:10:23.215 14298.764 - 14358.342: 98.6397% ( 8) 00:10:23.215 
14358.342 - 14417.920: 98.7040% ( 7) 00:10:23.215 14417.920 - 14477.498: 98.7500% ( 5) 00:10:23.215 14477.498 - 14537.076: 98.7868% ( 4) 00:10:23.215 14537.076 - 14596.655: 98.8143% ( 3) 00:10:23.215 14596.655 - 14656.233: 98.8235% ( 1) 00:10:23.215 25022.836 - 25141.993: 98.8327% ( 1) 00:10:23.215 25141.993 - 25261.149: 98.8603% ( 3) 00:10:23.215 25261.149 - 25380.305: 98.8971% ( 4) 00:10:23.215 25380.305 - 25499.462: 98.9338% ( 4) 00:10:23.215 25499.462 - 25618.618: 98.9706% ( 4) 00:10:23.215 25618.618 - 25737.775: 99.0074% ( 4) 00:10:23.215 25737.775 - 25856.931: 99.0441% ( 4) 00:10:23.215 25856.931 - 25976.087: 99.0809% ( 4) 00:10:23.215 25976.087 - 26095.244: 99.1176% ( 4) 00:10:23.215 26095.244 - 26214.400: 99.1544% ( 4) 00:10:23.215 26214.400 - 26333.556: 99.2004% ( 5) 00:10:23.215 26333.556 - 26452.713: 99.2188% ( 2) 00:10:23.215 26452.713 - 26571.869: 99.2555% ( 4) 00:10:23.215 26571.869 - 26691.025: 99.2923% ( 4) 00:10:23.215 26691.025 - 26810.182: 99.3290% ( 4) 00:10:23.215 26810.182 - 26929.338: 99.3658% ( 4) 00:10:23.215 26929.338 - 27048.495: 99.4026% ( 4) 00:10:23.215 27048.495 - 27167.651: 99.4393% ( 4) 00:10:23.215 27167.651 - 27286.807: 99.4761% ( 4) 00:10:23.215 27286.807 - 27405.964: 99.5037% ( 3) 00:10:23.215 27405.964 - 27525.120: 99.5312% ( 3) 00:10:23.215 27525.120 - 27644.276: 99.5772% ( 5) 00:10:23.215 27644.276 - 27763.433: 99.6140% ( 4) 00:10:23.215 27763.433 - 27882.589: 99.6415% ( 3) 00:10:23.215 27882.589 - 28001.745: 99.6783% ( 4) 00:10:23.215 28001.745 - 28120.902: 99.7151% ( 4) 00:10:23.215 28120.902 - 28240.058: 99.7426% ( 3) 00:10:23.215 28240.058 - 28359.215: 99.7794% ( 4) 00:10:23.215 28359.215 - 28478.371: 99.8254% ( 5) 00:10:23.215 28478.371 - 28597.527: 99.8621% ( 4) 00:10:23.215 28597.527 - 28716.684: 99.8989% ( 4) 00:10:23.215 28716.684 - 28835.840: 99.9357% ( 4) 00:10:23.215 28835.840 - 28954.996: 99.9724% ( 4) 00:10:23.215 28954.996 - 29074.153: 100.0000% ( 3) 00:10:23.215 00:10:23.215 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:23.215 ============================================================================== 00:10:23.215 Range in us Cumulative IO count 00:10:23.215 9413.353 - 9472.931: 0.0735% ( 8) 00:10:23.215 9472.931 - 9532.509: 0.3401% ( 29) 00:10:23.215 9532.509 - 9592.087: 0.6893% ( 38) 00:10:23.215 9592.087 - 9651.665: 1.0662% ( 41) 00:10:23.215 9651.665 - 9711.244: 1.5533% ( 53) 00:10:23.215 9711.244 - 9770.822: 2.2151% ( 72) 00:10:23.215 9770.822 - 9830.400: 2.8125% ( 65) 00:10:23.215 9830.400 - 9889.978: 3.4283% ( 67) 00:10:23.215 9889.978 - 9949.556: 4.1176% ( 75) 00:10:23.215 9949.556 - 10009.135: 4.7335% ( 67) 00:10:23.215 10009.135 - 10068.713: 5.3768% ( 70) 00:10:23.216 10068.713 - 10128.291: 6.2316% ( 93) 00:10:23.216 10128.291 - 10187.869: 7.3070% ( 117) 00:10:23.216 10187.869 - 10247.447: 8.3548% ( 114) 00:10:23.216 10247.447 - 10307.025: 9.5588% ( 131) 00:10:23.216 10307.025 - 10366.604: 10.8364% ( 139) 00:10:23.216 10366.604 - 10426.182: 12.0312% ( 130) 00:10:23.216 10426.182 - 10485.760: 13.2904% ( 137) 00:10:23.216 10485.760 - 10545.338: 14.7978% ( 164) 00:10:23.216 10545.338 - 10604.916: 16.2408% ( 157) 00:10:23.216 10604.916 - 10664.495: 17.8125% ( 171) 00:10:23.216 10664.495 - 10724.073: 19.3107% ( 163) 00:10:23.216 10724.073 - 10783.651: 20.8180% ( 164) 00:10:23.216 10783.651 - 10843.229: 22.4173% ( 174) 00:10:23.216 10843.229 - 10902.807: 24.2004% ( 194) 00:10:23.216 10902.807 - 10962.385: 26.1305% ( 210) 00:10:23.216 10962.385 - 11021.964: 28.0147% ( 205) 00:10:23.216 11021.964 - 
11081.542: 29.9449% ( 210) 00:10:23.216 11081.542 - 11141.120: 31.9118% ( 214) 00:10:23.216 11141.120 - 11200.698: 33.9430% ( 221) 00:10:23.216 11200.698 - 11260.276: 36.0018% ( 224) 00:10:23.216 11260.276 - 11319.855: 38.1526% ( 234) 00:10:23.216 11319.855 - 11379.433: 40.3217% ( 236) 00:10:23.216 11379.433 - 11439.011: 42.6471% ( 253) 00:10:23.216 11439.011 - 11498.589: 44.8438% ( 239) 00:10:23.216 11498.589 - 11558.167: 47.1324% ( 249) 00:10:23.216 11558.167 - 11617.745: 49.5956% ( 268) 00:10:23.216 11617.745 - 11677.324: 51.9853% ( 260) 00:10:23.216 11677.324 - 11736.902: 54.3199% ( 254) 00:10:23.216 11736.902 - 11796.480: 56.5993% ( 248) 00:10:23.216 11796.480 - 11856.058: 58.7040% ( 229) 00:10:23.216 11856.058 - 11915.636: 60.7996% ( 228) 00:10:23.216 11915.636 - 11975.215: 62.8125% ( 219) 00:10:23.216 11975.215 - 12034.793: 64.7978% ( 216) 00:10:23.216 12034.793 - 12094.371: 66.6728% ( 204) 00:10:23.216 12094.371 - 12153.949: 68.5202% ( 201) 00:10:23.216 12153.949 - 12213.527: 70.4044% ( 205) 00:10:23.216 12213.527 - 12273.105: 72.1324% ( 188) 00:10:23.216 12273.105 - 12332.684: 73.9154% ( 194) 00:10:23.216 12332.684 - 12392.262: 75.6985% ( 194) 00:10:23.216 12392.262 - 12451.840: 77.4357% ( 189) 00:10:23.216 12451.840 - 12511.418: 79.0901% ( 180) 00:10:23.216 12511.418 - 12570.996: 80.5790% ( 162) 00:10:23.216 12570.996 - 12630.575: 82.0404% ( 159) 00:10:23.216 12630.575 - 12690.153: 83.3272% ( 140) 00:10:23.216 12690.153 - 12749.731: 84.5312% ( 131) 00:10:23.216 12749.731 - 12809.309: 85.6710% ( 124) 00:10:23.216 12809.309 - 12868.887: 86.7831% ( 121) 00:10:23.216 12868.887 - 12928.465: 87.8493% ( 116) 00:10:23.216 12928.465 - 12988.044: 88.8787% ( 112) 00:10:23.216 12988.044 - 13047.622: 89.8621% ( 107) 00:10:23.216 13047.622 - 13107.200: 90.8364% ( 106) 00:10:23.216 13107.200 - 13166.778: 91.6912% ( 93) 00:10:23.216 13166.778 - 13226.356: 92.5551% ( 94) 00:10:23.216 13226.356 - 13285.935: 93.3180% ( 83) 00:10:23.216 13285.935 - 13345.513: 94.0165% ( 76) 00:10:23.216 13345.513 - 13405.091: 94.6783% ( 72) 00:10:23.216 13405.091 - 13464.669: 95.2941% ( 67) 00:10:23.216 13464.669 - 13524.247: 95.7996% ( 55) 00:10:23.216 13524.247 - 13583.825: 96.3327% ( 58) 00:10:23.216 13583.825 - 13643.404: 96.7463% ( 45) 00:10:23.216 13643.404 - 13702.982: 97.1048% ( 39) 00:10:23.216 13702.982 - 13762.560: 97.3529% ( 27) 00:10:23.216 13762.560 - 13822.138: 97.5735% ( 24) 00:10:23.216 13822.138 - 13881.716: 97.7849% ( 23) 00:10:23.216 13881.716 - 13941.295: 97.9228% ( 15) 00:10:23.216 13941.295 - 14000.873: 98.0607% ( 15) 00:10:23.216 14000.873 - 14060.451: 98.1710% ( 12) 00:10:23.216 14060.451 - 14120.029: 98.2537% ( 9) 00:10:23.216 14120.029 - 14179.607: 98.3364% ( 9) 00:10:23.216 14179.607 - 14239.185: 98.4007% ( 7) 00:10:23.216 14239.185 - 14298.764: 98.4743% ( 8) 00:10:23.216 14298.764 - 14358.342: 98.5662% ( 10) 00:10:23.216 14358.342 - 14417.920: 98.6397% ( 8) 00:10:23.216 14417.920 - 14477.498: 98.7040% ( 7) 00:10:23.216 14477.498 - 14537.076: 98.7500% ( 5) 00:10:23.216 14537.076 - 14596.655: 98.7776% ( 3) 00:10:23.216 14596.655 - 14656.233: 98.7960% ( 2) 00:10:23.216 14656.233 - 14715.811: 98.8143% ( 2) 00:10:23.216 14715.811 - 14775.389: 98.8235% ( 1) 00:10:23.216 25499.462 - 25618.618: 98.8419% ( 2) 00:10:23.216 25618.618 - 25737.775: 98.9062% ( 7) 00:10:23.216 25737.775 - 25856.931: 98.9890% ( 9) 00:10:23.216 25856.931 - 25976.087: 99.0901% ( 11) 00:10:23.216 25976.087 - 26095.244: 99.2096% ( 13) 00:10:23.216 26095.244 - 26214.400: 99.2371% ( 3) 00:10:23.216 26214.400 - 26333.556: 
99.2739% ( 4) 00:10:23.216 26333.556 - 26452.713: 99.3015% ( 3) 00:10:23.216 26452.713 - 26571.869: 99.3290% ( 3) 00:10:23.216 26571.869 - 26691.025: 99.3658% ( 4) 00:10:23.216 26691.025 - 26810.182: 99.3842% ( 2) 00:10:23.216 26810.182 - 26929.338: 99.4118% ( 3) 00:10:23.216 26929.338 - 27048.495: 99.4485% ( 4) 00:10:23.216 27048.495 - 27167.651: 99.4761% ( 3) 00:10:23.216 27167.651 - 27286.807: 99.5037% ( 3) 00:10:23.216 27286.807 - 27405.964: 99.5312% ( 3) 00:10:23.216 27405.964 - 27525.120: 99.5680% ( 4) 00:10:23.216 27525.120 - 27644.276: 99.6048% ( 4) 00:10:23.216 27644.276 - 27763.433: 99.6415% ( 4) 00:10:23.216 27763.433 - 27882.589: 99.6783% ( 4) 00:10:23.216 27882.589 - 28001.745: 99.7151% ( 4) 00:10:23.216 28001.745 - 28120.902: 99.7610% ( 5) 00:10:23.216 28120.902 - 28240.058: 99.7978% ( 4) 00:10:23.216 28240.058 - 28359.215: 99.8438% ( 5) 00:10:23.216 28359.215 - 28478.371: 99.8805% ( 4) 00:10:23.216 28478.371 - 28597.527: 99.9173% ( 4) 00:10:23.216 28597.527 - 28716.684: 99.9632% ( 5) 00:10:23.216 28716.684 - 28835.840: 100.0000% ( 4) 00:10:23.216 00:10:23.216 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:23.216 ============================================================================== 00:10:23.216 Range in us Cumulative IO count 00:10:23.216 9234.618 - 9294.196: 0.0551% ( 6) 00:10:23.216 9294.196 - 9353.775: 0.1654% ( 12) 00:10:23.216 9353.775 - 9413.353: 0.2849% ( 13) 00:10:23.216 9413.353 - 9472.931: 0.4320% ( 16) 00:10:23.216 9472.931 - 9532.509: 0.5974% ( 18) 00:10:23.216 9532.509 - 9592.087: 0.7169% ( 13) 00:10:23.216 9592.087 - 9651.665: 0.8915% ( 19) 00:10:23.216 9651.665 - 9711.244: 1.2224% ( 36) 00:10:23.216 9711.244 - 9770.822: 1.5441% ( 35) 00:10:23.216 9770.822 - 9830.400: 1.8934% ( 38) 00:10:23.216 9830.400 - 9889.978: 2.3070% ( 45) 00:10:23.216 9889.978 - 9949.556: 3.0331% ( 79) 00:10:23.216 9949.556 - 10009.135: 3.8419% ( 88) 00:10:23.216 10009.135 - 10068.713: 4.7886% ( 103) 00:10:23.216 10068.713 - 10128.291: 5.8180% ( 112) 00:10:23.216 10128.291 - 10187.869: 7.0037% ( 129) 00:10:23.216 10187.869 - 10247.447: 8.2904% ( 140) 00:10:23.216 10247.447 - 10307.025: 9.7610% ( 160) 00:10:23.216 10307.025 - 10366.604: 11.0478% ( 140) 00:10:23.216 10366.604 - 10426.182: 12.3070% ( 137) 00:10:23.216 10426.182 - 10485.760: 13.5386% ( 134) 00:10:23.216 10485.760 - 10545.338: 14.8713% ( 145) 00:10:23.216 10545.338 - 10604.916: 16.2960% ( 155) 00:10:23.216 10604.916 - 10664.495: 17.8309% ( 167) 00:10:23.216 10664.495 - 10724.073: 19.3015% ( 160) 00:10:23.216 10724.073 - 10783.651: 20.7629% ( 159) 00:10:23.216 10783.651 - 10843.229: 22.3070% ( 168) 00:10:23.216 10843.229 - 10902.807: 23.9338% ( 177) 00:10:23.216 10902.807 - 10962.385: 25.6526% ( 187) 00:10:23.216 10962.385 - 11021.964: 27.7206% ( 225) 00:10:23.216 11021.964 - 11081.542: 29.7794% ( 224) 00:10:23.216 11081.542 - 11141.120: 31.7463% ( 214) 00:10:23.216 11141.120 - 11200.698: 33.8235% ( 226) 00:10:23.216 11200.698 - 11260.276: 35.9926% ( 236) 00:10:23.216 11260.276 - 11319.855: 38.2445% ( 245) 00:10:23.216 11319.855 - 11379.433: 40.5607% ( 252) 00:10:23.216 11379.433 - 11439.011: 42.7206% ( 235) 00:10:23.216 11439.011 - 11498.589: 44.9265% ( 240) 00:10:23.216 11498.589 - 11558.167: 47.1507% ( 242) 00:10:23.216 11558.167 - 11617.745: 49.5404% ( 260) 00:10:23.216 11617.745 - 11677.324: 51.9393% ( 261) 00:10:23.216 11677.324 - 11736.902: 54.2555% ( 252) 00:10:23.216 11736.902 - 11796.480: 56.5901% ( 254) 00:10:23.216 11796.480 - 11856.058: 58.8879% ( 250) 00:10:23.216 11856.058 - 
11915.636: 61.0570% ( 236) 00:10:23.216 11915.636 - 11975.215: 63.2261% ( 236) 00:10:23.216 11975.215 - 12034.793: 65.1562% ( 210) 00:10:23.216 12034.793 - 12094.371: 67.1599% ( 218) 00:10:23.216 12094.371 - 12153.949: 69.1452% ( 216) 00:10:23.216 12153.949 - 12213.527: 70.9926% ( 201) 00:10:23.216 12213.527 - 12273.105: 72.7574% ( 192) 00:10:23.216 12273.105 - 12332.684: 74.4485% ( 184) 00:10:23.216 12332.684 - 12392.262: 76.1029% ( 180) 00:10:23.216 12392.262 - 12451.840: 77.6746% ( 171) 00:10:23.216 12451.840 - 12511.418: 79.1636% ( 162) 00:10:23.216 12511.418 - 12570.996: 80.6710% ( 164) 00:10:23.216 12570.996 - 12630.575: 82.0221% ( 147) 00:10:23.216 12630.575 - 12690.153: 83.3088% ( 140) 00:10:23.216 12690.153 - 12749.731: 84.5956% ( 140) 00:10:23.216 12749.731 - 12809.309: 85.7812% ( 129) 00:10:23.216 12809.309 - 12868.887: 86.9669% ( 129) 00:10:23.216 12868.887 - 12928.465: 88.0974% ( 123) 00:10:23.216 12928.465 - 12988.044: 89.1176% ( 111) 00:10:23.216 12988.044 - 13047.622: 90.0643% ( 103) 00:10:23.216 13047.622 - 13107.200: 90.9283% ( 94) 00:10:23.216 13107.200 - 13166.778: 91.7463% ( 89) 00:10:23.216 13166.778 - 13226.356: 92.6287% ( 96) 00:10:23.216 13226.356 - 13285.935: 93.4007% ( 84) 00:10:23.216 13285.935 - 13345.513: 94.0901% ( 75) 00:10:23.216 13345.513 - 13405.091: 94.7059% ( 67) 00:10:23.216 13405.091 - 13464.669: 95.2390% ( 58) 00:10:23.216 13464.669 - 13524.247: 95.7445% ( 55) 00:10:23.216 13524.247 - 13583.825: 96.1581% ( 45) 00:10:23.216 13583.825 - 13643.404: 96.5257% ( 40) 00:10:23.216 13643.404 - 13702.982: 96.8290% ( 33) 00:10:23.216 13702.982 - 13762.560: 97.1048% ( 30) 00:10:23.216 13762.560 - 13822.138: 97.3438% ( 26) 00:10:23.216 13822.138 - 13881.716: 97.5368% ( 21) 00:10:23.217 13881.716 - 13941.295: 97.6930% ( 17) 00:10:23.217 13941.295 - 14000.873: 97.7665% ( 8) 00:10:23.217 14000.873 - 14060.451: 97.8493% ( 9) 00:10:23.217 14060.451 - 14120.029: 97.9320% ( 9) 00:10:23.217 14120.029 - 14179.607: 98.0055% ( 8) 00:10:23.217 14179.607 - 14239.185: 98.0974% ( 10) 00:10:23.217 14239.185 - 14298.764: 98.1710% ( 8) 00:10:23.217 14298.764 - 14358.342: 98.2629% ( 10) 00:10:23.217 14358.342 - 14417.920: 98.3364% ( 8) 00:10:23.217 14417.920 - 14477.498: 98.4283% ( 10) 00:10:23.217 14477.498 - 14537.076: 98.5110% ( 9) 00:10:23.217 14537.076 - 14596.655: 98.5938% ( 9) 00:10:23.217 14596.655 - 14656.233: 98.6581% ( 7) 00:10:23.217 14656.233 - 14715.811: 98.6949% ( 4) 00:10:23.217 14715.811 - 14775.389: 98.7500% ( 6) 00:10:23.217 14775.389 - 14834.967: 98.7776% ( 3) 00:10:23.217 14834.967 - 14894.545: 98.7960% ( 2) 00:10:23.217 14894.545 - 14954.124: 98.8143% ( 2) 00:10:23.217 14954.124 - 15013.702: 98.8235% ( 1) 00:10:23.217 25022.836 - 25141.993: 98.8419% ( 2) 00:10:23.217 25141.993 - 25261.149: 98.9522% ( 12) 00:10:23.217 25261.149 - 25380.305: 99.0441% ( 10) 00:10:23.217 25380.305 - 25499.462: 99.1176% ( 8) 00:10:23.217 25499.462 - 25618.618: 99.1544% ( 4) 00:10:23.217 25618.618 - 25737.775: 99.1912% ( 4) 00:10:23.217 25737.775 - 25856.931: 99.2188% ( 3) 00:10:23.217 25856.931 - 25976.087: 99.2555% ( 4) 00:10:23.217 25976.087 - 26095.244: 99.2923% ( 4) 00:10:23.217 26095.244 - 26214.400: 99.3290% ( 4) 00:10:23.217 26214.400 - 26333.556: 99.3566% ( 3) 00:10:23.217 26333.556 - 26452.713: 99.3934% ( 4) 00:10:23.217 26452.713 - 26571.869: 99.4301% ( 4) 00:10:23.217 26571.869 - 26691.025: 99.4669% ( 4) 00:10:23.217 26691.025 - 26810.182: 99.4945% ( 3) 00:10:23.217 26810.182 - 26929.338: 99.5312% ( 4) 00:10:23.217 26929.338 - 27048.495: 99.5772% ( 5) 00:10:23.217 
27048.495 - 27167.651: 99.6140% ( 4) 00:10:23.217 27167.651 - 27286.807: 99.6415% ( 3) 00:10:23.217 27286.807 - 27405.964: 99.6875% ( 5) 00:10:23.217 27405.964 - 27525.120: 99.7243% ( 4) 00:10:23.217 27525.120 - 27644.276: 99.7702% ( 5) 00:10:23.217 27644.276 - 27763.433: 99.7978% ( 3) 00:10:23.217 27763.433 - 27882.589: 99.8346% ( 4) 00:10:23.217 27882.589 - 28001.745: 99.8713% ( 4) 00:10:23.217 28001.745 - 28120.902: 99.9173% ( 5) 00:10:23.217 28120.902 - 28240.058: 99.9540% ( 4) 00:10:23.217 28240.058 - 28359.215: 100.0000% ( 5) 00:10:23.217 00:10:23.217 ************************************ 00:10:23.217 END TEST nvme_perf 00:10:23.217 ************************************ 00:10:23.217 20:56:44 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:23.217 00:10:23.217 real 0m2.855s 00:10:23.217 user 0m2.428s 00:10:23.217 sys 0m0.312s 00:10:23.217 20:56:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:23.217 20:56:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.476 20:56:44 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:23.476 20:56:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:23.476 20:56:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.476 20:56:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.476 ************************************ 00:10:23.476 START TEST nvme_hello_world 00:10:23.476 ************************************ 00:10:23.476 20:56:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:23.735 Initializing NVMe Controllers 00:10:23.735 Attached to 0000:00:06.0 00:10:23.735 Namespace ID: 1 size: 6GB 00:10:23.735 Attached to 0000:00:07.0 00:10:23.735 Namespace ID: 1 size: 5GB 00:10:23.735 Attached to 0000:00:09.0 00:10:23.735 Namespace ID: 1 size: 1GB 00:10:23.735 Attached to 0000:00:08.0 00:10:23.735 Namespace ID: 1 size: 4GB 00:10:23.735 Namespace ID: 2 size: 4GB 00:10:23.735 Namespace ID: 3 size: 4GB 00:10:23.735 Initialization complete. 00:10:23.735 INFO: using host memory buffer for IO 00:10:23.735 Hello world! 00:10:23.735 INFO: using host memory buffer for IO 00:10:23.735 Hello world! 00:10:23.735 INFO: using host memory buffer for IO 00:10:23.735 Hello world! 00:10:23.735 INFO: using host memory buffer for IO 00:10:23.735 Hello world! 00:10:23.735 INFO: using host memory buffer for IO 00:10:23.735 Hello world! 00:10:23.735 INFO: using host memory buffer for IO 00:10:23.735 Hello world! 
00:10:23.735 ************************************ 00:10:23.735 END TEST nvme_hello_world 00:10:23.735 ************************************ 00:10:23.735 00:10:23.735 real 0m0.397s 00:10:23.735 user 0m0.211s 00:10:23.735 sys 0m0.139s 00:10:23.735 20:56:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:23.735 20:56:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.735 20:56:44 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:23.735 20:56:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:23.735 20:56:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.735 20:56:44 -- common/autotest_common.sh@10 -- # set +x 00:10:23.735 ************************************ 00:10:23.735 START TEST nvme_sgl 00:10:23.735 ************************************ 00:10:23.735 20:56:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:23.994 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:10:23.994 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:10:23.994 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:10:24.259 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:10:24.259 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:10:24.259 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:10:24.259 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:10:24.259 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:10:24.259 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:10:24.259 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:10:24.259 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:10:24.259 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:10:24.259 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:10:24.260 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:10:24.260 
0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:10:24.260 NVMe Readv/Writev Request test 00:10:24.260 Attached to 0000:00:06.0 00:10:24.260 Attached to 0000:00:07.0 00:10:24.260 Attached to 0000:00:09.0 00:10:24.260 Attached to 0000:00:08.0 00:10:24.260 0000:00:06.0: build_io_request_2 test passed 00:10:24.260 0000:00:06.0: build_io_request_4 test passed 00:10:24.260 0000:00:06.0: build_io_request_5 test passed 00:10:24.260 0000:00:06.0: build_io_request_6 test passed 00:10:24.260 0000:00:06.0: build_io_request_7 test passed 00:10:24.260 0000:00:06.0: build_io_request_10 test passed 00:10:24.260 0000:00:07.0: build_io_request_2 test passed 00:10:24.260 0000:00:07.0: build_io_request_4 test passed 00:10:24.260 0000:00:07.0: build_io_request_5 test passed 00:10:24.260 0000:00:07.0: build_io_request_6 test passed 00:10:24.260 0000:00:07.0: build_io_request_7 test passed 00:10:24.260 0000:00:07.0: build_io_request_10 test passed 00:10:24.260 Cleaning up... 00:10:24.523 00:10:24.523 real 0m0.562s 00:10:24.523 user 0m0.370s 00:10:24.523 sys 0m0.144s 00:10:24.523 20:56:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:24.523 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:24.523 ************************************ 00:10:24.523 END TEST nvme_sgl 00:10:24.523 ************************************ 00:10:24.523 20:56:45 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:24.523 20:56:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:24.523 20:56:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.523 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:24.523 ************************************ 00:10:24.523 START TEST nvme_e2edp 00:10:24.523 ************************************ 00:10:24.523 20:56:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:24.781 NVMe Write/Read with End-to-End data protection test 00:10:24.781 Attached to 0000:00:06.0 00:10:24.781 Attached to 0000:00:07.0 00:10:24.781 Attached to 0000:00:09.0 00:10:24.781 Attached to 0000:00:08.0 00:10:24.781 Cleaning up... 
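Both runs above follow the harness pattern used for every nvme sub-test: run_test wraps a standalone binary under test/nvme/ and only its exit status decides pass or fail. The "Invalid IO length parameter" lines are therefore not errors in themselves; they are the scatter-gather request shapes these emulated QEMU controllers reject, printed alongside the "test passed" shapes they accept. A sketch of invoking the two binaries directly, paths copied from the run_test lines:

# scatter-gather list exercise, then the end-to-end data protection pass
/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp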
00:10:24.781 ************************************ 00:10:24.781 END TEST nvme_e2edp 00:10:24.781 ************************************ 00:10:24.781 00:10:24.781 real 0m0.274s 00:10:24.781 user 0m0.100s 00:10:24.781 sys 0m0.131s 00:10:24.781 20:56:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:24.781 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:24.781 20:56:45 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:24.781 20:56:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:24.781 20:56:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.781 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:24.781 ************************************ 00:10:24.781 START TEST nvme_reserve 00:10:24.781 ************************************ 00:10:24.781 20:56:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:25.039 ===================================================== 00:10:25.039 NVMe Controller at PCI bus 0, device 6, function 0 00:10:25.039 ===================================================== 00:10:25.039 Reservations: Not Supported 00:10:25.039 ===================================================== 00:10:25.039 NVMe Controller at PCI bus 0, device 7, function 0 00:10:25.039 ===================================================== 00:10:25.039 Reservations: Not Supported 00:10:25.039 ===================================================== 00:10:25.039 NVMe Controller at PCI bus 0, device 9, function 0 00:10:25.039 ===================================================== 00:10:25.039 Reservations: Not Supported 00:10:25.039 ===================================================== 00:10:25.039 NVMe Controller at PCI bus 0, device 8, function 0 00:10:25.039 ===================================================== 00:10:25.039 Reservations: Not Supported 00:10:25.039 Reservation test passed 00:10:25.039 00:10:25.039 real 0m0.235s 00:10:25.039 user 0m0.076s 00:10:25.039 sys 0m0.113s 00:10:25.039 ************************************ 00:10:25.039 END TEST nvme_reserve 00:10:25.039 ************************************ 00:10:25.039 20:56:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:25.039 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:25.039 20:56:45 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:25.039 20:56:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:25.039 20:56:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:25.039 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:25.039 ************************************ 00:10:25.039 START TEST nvme_err_injection 00:10:25.039 ************************************ 00:10:25.039 20:56:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:25.297 NVMe Error Injection test 00:10:25.297 Attached to 0000:00:06.0 00:10:25.297 Attached to 0000:00:07.0 00:10:25.297 Attached to 0000:00:09.0 00:10:25.297 Attached to 0000:00:08.0 00:10:25.297 0000:00:06.0: get features failed as expected 00:10:25.297 0000:00:07.0: get features failed as expected 00:10:25.297 0000:00:09.0: get features failed as expected 00:10:25.297 0000:00:08.0: get features failed as expected 00:10:25.297 0000:00:06.0: get features successfully as expected 00:10:25.297 0000:00:07.0: get features successfully as expected 00:10:25.297 0000:00:09.0: get features 
successfully as expected 00:10:25.297 0000:00:08.0: get features successfully as expected 00:10:25.297 0000:00:07.0: read failed as expected 00:10:25.297 0000:00:06.0: read failed as expected 00:10:25.297 0000:00:09.0: read failed as expected 00:10:25.297 0000:00:08.0: read failed as expected 00:10:25.297 0000:00:06.0: read successfully as expected 00:10:25.297 0000:00:07.0: read successfully as expected 00:10:25.297 0000:00:09.0: read successfully as expected 00:10:25.297 0000:00:08.0: read successfully as expected 00:10:25.297 Cleaning up... 00:10:25.297 00:10:25.297 real 0m0.352s 00:10:25.297 user 0m0.152s 00:10:25.297 sys 0m0.145s 00:10:25.297 20:56:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:25.297 ************************************ 00:10:25.297 END TEST nvme_err_injection 00:10:25.297 ************************************ 00:10:25.297 20:56:46 -- common/autotest_common.sh@10 -- # set +x 00:10:25.556 20:56:46 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:25.556 20:56:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:25.556 20:56:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:25.556 20:56:46 -- common/autotest_common.sh@10 -- # set +x 00:10:25.556 ************************************ 00:10:25.556 START TEST nvme_overhead 00:10:25.556 ************************************ 00:10:25.556 20:56:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:26.936 Initializing NVMe Controllers 00:10:26.936 Attached to 0000:00:06.0 00:10:26.936 Attached to 0000:00:07.0 00:10:26.936 Attached to 0000:00:09.0 00:10:26.936 Attached to 0000:00:08.0 00:10:26.936 Initialization complete. Launching workers. 
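The overhead run just launched was started as overhead -o 4096 -t 1 -H -i 0 (see the run_test line above). Reading the flags off that invocation: -o 4096 is the IO size in bytes and -t 1 the run time in seconds, matching the -q/-o/-t conventions of the spdk_nvme_perf runs later in this job, while -H plausibly switches on the submit/complete histograms printed next; the -H reading is inferred from this output rather than from the tool's help text. A sketch of the direct invocation:

# per-IO software overhead, 4 KiB IOs for 1 second, with histograms
/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0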
00:10:26.936 submit (in ns) avg, min, max = 16163.6, 11925.5, 92820.0 00:10:26.936 complete (in ns) avg, min, max = 11403.2, 8247.3, 194337.7 00:10:26.936 00:10:26.936 Submit histogram 00:10:26.936 ================ 00:10:26.936 Range in us Cumulative Count 00:10:26.936 11.869 - 11.927: 0.0117% ( 1) 00:10:26.936 12.567 - 12.625: 0.0234% ( 1) 00:10:26.936 12.800 - 12.858: 0.0701% ( 4) 00:10:26.936 12.858 - 12.916: 0.0817% ( 1) 00:10:26.936 12.916 - 12.975: 0.1168% ( 3) 00:10:26.936 12.975 - 13.033: 0.2336% ( 10) 00:10:26.936 13.033 - 13.091: 0.5956% ( 31) 00:10:26.936 13.091 - 13.149: 1.2963% ( 60) 00:10:26.936 13.149 - 13.207: 3.1064% ( 155) 00:10:26.936 13.207 - 13.265: 5.8975% ( 239) 00:10:26.936 13.265 - 13.324: 9.7513% ( 330) 00:10:26.936 13.324 - 13.382: 13.3248% ( 306) 00:10:26.936 13.382 - 13.440: 17.2837% ( 339) 00:10:26.936 13.440 - 13.498: 21.7330% ( 381) 00:10:26.936 13.498 - 13.556: 26.4744% ( 406) 00:10:26.936 13.556 - 13.615: 32.4769% ( 514) 00:10:26.936 13.615 - 13.673: 38.4678% ( 513) 00:10:26.936 13.673 - 13.731: 43.9799% ( 472) 00:10:26.936 13.731 - 13.789: 48.9898% ( 429) 00:10:26.936 13.789 - 13.847: 52.5400% ( 304) 00:10:26.936 13.847 - 13.905: 55.1909% ( 227) 00:10:26.936 13.905 - 13.964: 57.6784% ( 213) 00:10:26.936 13.964 - 14.022: 59.9089% ( 191) 00:10:26.936 14.022 - 14.080: 61.7657% ( 159) 00:10:26.936 14.080 - 14.138: 63.3189% ( 133) 00:10:26.936 14.138 - 14.196: 64.9072% ( 136) 00:10:26.936 14.196 - 14.255: 66.3552% ( 124) 00:10:26.936 14.255 - 14.313: 67.4413% ( 93) 00:10:26.936 14.313 - 14.371: 68.7376% ( 111) 00:10:26.936 14.371 - 14.429: 69.7186% ( 84) 00:10:26.937 14.429 - 14.487: 70.8747% ( 99) 00:10:26.937 14.487 - 14.545: 71.6221% ( 64) 00:10:26.937 14.545 - 14.604: 72.4162% ( 68) 00:10:26.937 14.604 - 14.662: 73.1753% ( 65) 00:10:26.937 14.662 - 14.720: 73.9928% ( 70) 00:10:26.937 14.720 - 14.778: 75.1723% ( 101) 00:10:26.937 14.778 - 14.836: 76.3985% ( 105) 00:10:26.937 14.836 - 14.895: 77.1459% ( 64) 00:10:26.937 14.895 - 15.011: 78.4655% ( 113) 00:10:26.937 15.011 - 15.127: 79.2830% ( 70) 00:10:26.937 15.127 - 15.244: 80.2522% ( 83) 00:10:26.937 15.244 - 15.360: 81.2566% ( 86) 00:10:26.937 15.360 - 15.476: 81.9923% ( 63) 00:10:26.937 15.476 - 15.593: 82.4828% ( 42) 00:10:26.937 15.593 - 15.709: 82.9382% ( 39) 00:10:26.937 15.709 - 15.825: 83.2886% ( 30) 00:10:26.937 15.825 - 15.942: 83.5338% ( 21) 00:10:26.937 15.942 - 16.058: 83.6856% ( 13) 00:10:26.937 16.058 - 16.175: 83.8141% ( 11) 00:10:26.937 16.175 - 16.291: 83.9542% ( 12) 00:10:26.937 16.291 - 16.407: 84.0009% ( 4) 00:10:26.937 16.407 - 16.524: 84.1060% ( 9) 00:10:26.937 16.524 - 16.640: 84.1411% ( 3) 00:10:26.937 16.640 - 16.756: 84.1995% ( 5) 00:10:26.937 16.756 - 16.873: 84.2462% ( 4) 00:10:26.937 16.873 - 16.989: 84.2812% ( 3) 00:10:26.937 16.989 - 17.105: 84.3046% ( 2) 00:10:26.937 17.105 - 17.222: 84.3162% ( 1) 00:10:26.937 17.338 - 17.455: 84.3396% ( 2) 00:10:26.937 17.455 - 17.571: 84.3630% ( 2) 00:10:26.937 17.571 - 17.687: 84.3980% ( 3) 00:10:26.937 17.687 - 17.804: 84.4097% ( 1) 00:10:26.937 17.920 - 18.036: 84.4213% ( 1) 00:10:26.937 18.036 - 18.153: 84.4330% ( 1) 00:10:26.937 18.153 - 18.269: 84.4681% ( 3) 00:10:26.937 18.269 - 18.385: 84.4797% ( 1) 00:10:26.937 18.385 - 18.502: 84.4914% ( 1) 00:10:26.937 18.618 - 18.735: 84.5031% ( 1) 00:10:26.937 18.735 - 18.851: 84.5148% ( 1) 00:10:26.937 18.851 - 18.967: 84.5265% ( 1) 00:10:26.937 18.967 - 19.084: 84.5381% ( 1) 00:10:26.937 19.084 - 19.200: 84.5732% ( 3) 00:10:26.937 19.200 - 19.316: 84.6199% ( 4) 00:10:26.937 19.316 - 
19.433: 84.6549% ( 3) 00:10:26.937 19.433 - 19.549: 84.7250% ( 6) 00:10:26.937 19.549 - 19.665: 84.8301% ( 9) 00:10:26.937 19.665 - 19.782: 84.9118% ( 7) 00:10:26.937 19.782 - 19.898: 85.0753% ( 14) 00:10:26.937 19.898 - 20.015: 85.2388% ( 14) 00:10:26.937 20.015 - 20.131: 85.4490% ( 18) 00:10:26.937 20.131 - 20.247: 85.6242% ( 15) 00:10:26.937 20.247 - 20.364: 85.7760% ( 13) 00:10:26.937 20.364 - 20.480: 85.8694% ( 8) 00:10:26.937 20.480 - 20.596: 86.0563% ( 16) 00:10:26.937 20.596 - 20.713: 86.1731% ( 10) 00:10:26.937 20.713 - 20.829: 86.2431% ( 6) 00:10:26.937 20.829 - 20.945: 86.3833% ( 12) 00:10:26.937 20.945 - 21.062: 86.4417% ( 5) 00:10:26.937 21.062 - 21.178: 86.4767% ( 3) 00:10:26.937 21.178 - 21.295: 86.5584% ( 7) 00:10:26.937 21.295 - 21.411: 86.5935% ( 3) 00:10:26.937 21.411 - 21.527: 86.6519% ( 5) 00:10:26.937 21.527 - 21.644: 86.6986% ( 4) 00:10:26.937 21.644 - 21.760: 86.7219% ( 2) 00:10:26.937 21.760 - 21.876: 86.7570% ( 3) 00:10:26.937 21.876 - 21.993: 86.8504% ( 8) 00:10:26.937 21.993 - 22.109: 86.9438% ( 8) 00:10:26.937 22.109 - 22.225: 87.0373% ( 8) 00:10:26.937 22.225 - 22.342: 87.1190% ( 7) 00:10:26.937 22.342 - 22.458: 87.1307% ( 1) 00:10:26.937 22.458 - 22.575: 87.2124% ( 7) 00:10:26.937 22.575 - 22.691: 87.2358% ( 2) 00:10:26.937 22.691 - 22.807: 87.2708% ( 3) 00:10:26.937 22.807 - 22.924: 87.3059% ( 3) 00:10:26.937 22.924 - 23.040: 87.3526% ( 4) 00:10:26.937 23.040 - 23.156: 87.3876% ( 3) 00:10:26.937 23.389 - 23.505: 87.4226% ( 3) 00:10:26.937 23.505 - 23.622: 87.4460% ( 2) 00:10:26.937 23.855 - 23.971: 87.4693% ( 2) 00:10:26.937 23.971 - 24.087: 87.4810% ( 1) 00:10:26.937 24.087 - 24.204: 87.4927% ( 1) 00:10:26.937 24.553 - 24.669: 87.5044% ( 1) 00:10:26.937 24.669 - 24.785: 87.5394% ( 3) 00:10:26.937 24.785 - 24.902: 87.5744% ( 3) 00:10:26.937 24.902 - 25.018: 87.6212% ( 4) 00:10:26.937 25.018 - 25.135: 87.6445% ( 2) 00:10:26.937 25.135 - 25.251: 87.6679% ( 2) 00:10:26.937 25.251 - 25.367: 87.6912% ( 2) 00:10:26.937 25.484 - 25.600: 87.7146% ( 2) 00:10:26.937 25.833 - 25.949: 87.7379% ( 2) 00:10:26.937 26.065 - 26.182: 87.7496% ( 1) 00:10:26.937 26.182 - 26.298: 87.7613% ( 1) 00:10:26.937 26.415 - 26.531: 87.7847% ( 2) 00:10:26.937 26.880 - 26.996: 87.7963% ( 1) 00:10:26.937 27.229 - 27.345: 87.8080% ( 1) 00:10:26.937 27.578 - 27.695: 87.8314% ( 2) 00:10:26.937 27.695 - 27.811: 87.8781% ( 4) 00:10:26.937 27.811 - 27.927: 87.9949% ( 10) 00:10:26.937 27.927 - 28.044: 88.2635% ( 23) 00:10:26.937 28.044 - 28.160: 88.8240% ( 48) 00:10:26.937 28.160 - 28.276: 89.7232% ( 77) 00:10:26.937 28.276 - 28.393: 90.9961% ( 109) 00:10:26.937 28.393 - 28.509: 92.3625% ( 117) 00:10:26.937 28.509 - 28.625: 93.6588% ( 111) 00:10:26.937 28.625 - 28.742: 94.5229% ( 74) 00:10:26.937 28.742 - 28.858: 95.2470% ( 62) 00:10:26.937 28.858 - 28.975: 95.5857% ( 29) 00:10:26.937 28.975 - 29.091: 96.0294% ( 38) 00:10:26.937 29.091 - 29.207: 96.4031% ( 32) 00:10:26.937 29.207 - 29.324: 96.7301% ( 28) 00:10:26.937 29.324 - 29.440: 97.0805% ( 30) 00:10:26.937 29.440 - 29.556: 97.2673% ( 16) 00:10:26.937 29.556 - 29.673: 97.4775% ( 18) 00:10:26.937 29.673 - 29.789: 97.6060% ( 11) 00:10:26.937 29.789 - 30.022: 97.8162% ( 18) 00:10:26.937 30.022 - 30.255: 97.9213% ( 9) 00:10:26.937 30.255 - 30.487: 98.0614% ( 12) 00:10:26.937 30.487 - 30.720: 98.1198% ( 5) 00:10:26.937 30.720 - 30.953: 98.1665% ( 4) 00:10:26.937 30.953 - 31.185: 98.2132% ( 4) 00:10:26.937 31.185 - 31.418: 98.2600% ( 4) 00:10:26.937 31.418 - 31.651: 98.2833% ( 2) 00:10:26.937 31.651 - 31.884: 98.3067% ( 2) 00:10:26.937 31.884 - 
32.116: 98.3300% ( 2) 00:10:26.937 32.116 - 32.349: 98.3417% ( 1) 00:10:26.937 32.582 - 32.815: 98.3534% ( 1) 00:10:26.937 32.815 - 33.047: 98.3884% ( 3) 00:10:26.937 33.047 - 33.280: 98.4118% ( 2) 00:10:26.937 33.280 - 33.513: 98.4351% ( 2) 00:10:26.937 33.513 - 33.745: 98.4585% ( 2) 00:10:26.937 33.745 - 33.978: 98.4702% ( 1) 00:10:26.937 33.978 - 34.211: 98.5169% ( 4) 00:10:26.937 34.211 - 34.444: 98.5986% ( 7) 00:10:26.937 34.444 - 34.676: 98.6570% ( 5) 00:10:26.937 34.676 - 34.909: 98.7738% ( 10) 00:10:26.937 34.909 - 35.142: 98.8205% ( 4) 00:10:26.937 35.142 - 35.375: 98.8672% ( 4) 00:10:26.937 35.375 - 35.607: 98.9256% ( 5) 00:10:26.937 35.607 - 35.840: 98.9723% ( 4) 00:10:26.937 35.840 - 36.073: 99.0074% ( 3) 00:10:26.937 36.073 - 36.305: 99.0541% ( 4) 00:10:26.937 36.305 - 36.538: 99.1241% ( 6) 00:10:26.937 36.538 - 36.771: 99.1825% ( 5) 00:10:26.937 36.771 - 37.004: 99.1942% ( 1) 00:10:26.937 37.236 - 37.469: 99.2176% ( 2) 00:10:26.937 37.469 - 37.702: 99.2526% ( 3) 00:10:26.937 37.702 - 37.935: 99.2760% ( 2) 00:10:26.937 37.935 - 38.167: 99.3110% ( 3) 00:10:26.937 38.167 - 38.400: 99.3227% ( 1) 00:10:26.937 38.400 - 38.633: 99.3343% ( 1) 00:10:26.937 38.633 - 38.865: 99.3460% ( 1) 00:10:26.937 39.098 - 39.331: 99.3577% ( 1) 00:10:26.937 39.564 - 39.796: 99.3694% ( 1) 00:10:26.937 39.796 - 40.029: 99.3811% ( 1) 00:10:26.937 41.193 - 41.425: 99.4044% ( 2) 00:10:26.937 41.658 - 41.891: 99.4161% ( 1) 00:10:26.937 42.356 - 42.589: 99.4278% ( 1) 00:10:26.937 42.589 - 42.822: 99.4394% ( 1) 00:10:26.937 42.822 - 43.055: 99.4511% ( 1) 00:10:26.937 43.055 - 43.287: 99.4628% ( 1) 00:10:26.937 43.287 - 43.520: 99.4862% ( 2) 00:10:26.937 43.520 - 43.753: 99.5212% ( 3) 00:10:26.937 43.753 - 43.985: 99.5796% ( 5) 00:10:26.937 43.985 - 44.218: 99.6263% ( 4) 00:10:26.937 44.218 - 44.451: 99.6497% ( 2) 00:10:26.937 44.451 - 44.684: 99.6613% ( 1) 00:10:26.937 45.149 - 45.382: 99.6847% ( 2) 00:10:26.937 46.313 - 46.545: 99.6964% ( 1) 00:10:26.937 47.011 - 47.244: 99.7080% ( 1) 00:10:26.937 47.709 - 47.942: 99.7431% ( 3) 00:10:26.937 48.873 - 49.105: 99.7664% ( 2) 00:10:26.937 49.105 - 49.338: 99.7781% ( 1) 00:10:26.937 49.571 - 49.804: 99.7898% ( 1) 00:10:26.937 49.804 - 50.036: 99.8015% ( 1) 00:10:26.937 50.502 - 50.735: 99.8131% ( 1) 00:10:26.937 51.433 - 51.665: 99.8365% ( 2) 00:10:26.937 51.665 - 51.898: 99.8482% ( 1) 00:10:26.937 51.898 - 52.131: 99.8599% ( 1) 00:10:26.937 52.131 - 52.364: 99.8715% ( 1) 00:10:26.937 52.364 - 52.596: 99.8949% ( 2) 00:10:26.937 52.596 - 52.829: 99.9066% ( 1) 00:10:26.937 53.993 - 54.225: 99.9183% ( 1) 00:10:26.937 54.225 - 54.458: 99.9299% ( 1) 00:10:26.937 57.716 - 57.949: 99.9416% ( 1) 00:10:26.937 59.578 - 60.044: 99.9533% ( 1) 00:10:26.938 60.509 - 60.975: 99.9650% ( 1) 00:10:26.938 82.385 - 82.851: 99.9766% ( 1) 00:10:26.938 87.971 - 88.436: 99.9883% ( 1) 00:10:26.938 92.625 - 93.091: 100.0000% ( 1) 00:10:26.938 00:10:26.938 Complete histogram 00:10:26.938 ================== 00:10:26.938 Range in us Cumulative Count 00:10:26.938 8.204 - 8.262: 0.0234% ( 2) 00:10:26.938 8.262 - 8.320: 0.2452% ( 19) 00:10:26.938 8.320 - 8.378: 0.8759% ( 54) 00:10:26.938 8.378 - 8.436: 1.5766% ( 60) 00:10:26.938 8.436 - 8.495: 2.6743% ( 94) 00:10:26.938 8.495 - 8.553: 4.2508% ( 135) 00:10:26.938 8.553 - 8.611: 6.1194% ( 160) 00:10:26.938 8.611 - 8.669: 9.7513% ( 311) 00:10:26.938 8.669 - 8.727: 15.6020% ( 501) 00:10:26.938 8.727 - 8.785: 22.4804% ( 589) 00:10:26.938 8.785 - 8.844: 29.6275% ( 612) 00:10:26.938 8.844 - 8.902: 36.5526% ( 593) 00:10:26.938 8.902 - 8.960: 42.5201% 
( 511) 00:10:26.938 8.960 - 9.018: 47.5651% ( 432) 00:10:26.938 9.018 - 9.076: 51.8627% ( 368) 00:10:26.938 9.076 - 9.135: 55.4829% ( 310) 00:10:26.938 9.135 - 9.193: 58.0404% ( 219) 00:10:26.938 9.193 - 9.251: 60.0374% ( 171) 00:10:26.938 9.251 - 9.309: 61.5555% ( 130) 00:10:26.938 9.309 - 9.367: 62.7934% ( 106) 00:10:26.938 9.367 - 9.425: 63.7744% ( 84) 00:10:26.938 9.425 - 9.484: 64.5451% ( 66) 00:10:26.938 9.484 - 9.542: 65.7013% ( 99) 00:10:26.938 9.542 - 9.600: 66.9859% ( 110) 00:10:26.938 9.600 - 9.658: 68.8193% ( 157) 00:10:26.938 9.658 - 9.716: 69.9988% ( 101) 00:10:26.938 9.716 - 9.775: 71.1666% ( 100) 00:10:26.938 9.775 - 9.833: 72.1826% ( 87) 00:10:26.938 9.833 - 9.891: 73.0118% ( 71) 00:10:26.938 9.891 - 9.949: 74.1796% ( 100) 00:10:26.938 9.949 - 10.007: 75.0905% ( 78) 00:10:26.938 10.007 - 10.065: 75.9430% ( 73) 00:10:26.938 10.065 - 10.124: 76.6203% ( 58) 00:10:26.938 10.124 - 10.182: 77.0991% ( 41) 00:10:26.938 10.182 - 10.240: 77.4612% ( 31) 00:10:26.938 10.240 - 10.298: 77.7414% ( 24) 00:10:26.938 10.298 - 10.356: 78.0684% ( 28) 00:10:26.938 10.356 - 10.415: 78.3020% ( 20) 00:10:26.938 10.415 - 10.473: 78.5356% ( 20) 00:10:26.938 10.473 - 10.531: 78.8275% ( 25) 00:10:26.938 10.531 - 10.589: 79.1662% ( 29) 00:10:26.938 10.589 - 10.647: 79.4465% ( 24) 00:10:26.938 10.647 - 10.705: 79.6450% ( 17) 00:10:26.938 10.705 - 10.764: 79.9369% ( 25) 00:10:26.938 10.764 - 10.822: 80.0420% ( 9) 00:10:26.938 10.822 - 10.880: 80.2055% ( 14) 00:10:26.938 10.880 - 10.938: 80.3574% ( 13) 00:10:26.938 10.938 - 10.996: 80.5325% ( 15) 00:10:26.938 10.996 - 11.055: 80.7544% ( 19) 00:10:26.938 11.055 - 11.113: 80.9646% ( 18) 00:10:26.938 11.113 - 11.171: 81.2099% ( 21) 00:10:26.938 11.171 - 11.229: 81.3850% ( 15) 00:10:26.938 11.229 - 11.287: 81.7354% ( 30) 00:10:26.938 11.287 - 11.345: 82.0740% ( 29) 00:10:26.938 11.345 - 11.404: 82.3076% ( 20) 00:10:26.938 11.404 - 11.462: 82.5762% ( 23) 00:10:26.938 11.462 - 11.520: 82.8565% ( 24) 00:10:26.938 11.520 - 11.578: 83.1951% ( 29) 00:10:26.938 11.578 - 11.636: 83.4404% ( 21) 00:10:26.938 11.636 - 11.695: 83.7090% ( 23) 00:10:26.938 11.695 - 11.753: 83.9192% ( 18) 00:10:26.938 11.753 - 11.811: 84.0944% ( 15) 00:10:26.938 11.811 - 11.869: 84.1294% ( 3) 00:10:26.938 11.869 - 11.927: 84.1878% ( 5) 00:10:26.938 11.927 - 11.985: 84.2695% ( 7) 00:10:26.938 11.985 - 12.044: 84.3746% ( 9) 00:10:26.938 12.044 - 12.102: 84.4447% ( 6) 00:10:26.938 12.102 - 12.160: 84.4914% ( 4) 00:10:26.938 12.160 - 12.218: 84.5148% ( 2) 00:10:26.938 12.218 - 12.276: 84.5965% ( 7) 00:10:26.938 12.276 - 12.335: 84.6199% ( 2) 00:10:26.938 12.335 - 12.393: 84.6549% ( 3) 00:10:26.938 12.393 - 12.451: 84.6899% ( 3) 00:10:26.938 12.451 - 12.509: 84.7483% ( 5) 00:10:26.938 12.509 - 12.567: 84.7950% ( 4) 00:10:26.938 12.567 - 12.625: 84.8301% ( 3) 00:10:26.938 12.625 - 12.684: 84.8534% ( 2) 00:10:26.938 12.684 - 12.742: 84.9002% ( 4) 00:10:26.938 12.800 - 12.858: 84.9352% ( 3) 00:10:26.938 12.858 - 12.916: 84.9585% ( 2) 00:10:26.938 13.033 - 13.091: 84.9702% ( 1) 00:10:26.938 13.091 - 13.149: 84.9936% ( 2) 00:10:26.938 13.149 - 13.207: 85.0053% ( 1) 00:10:26.938 13.440 - 13.498: 85.0169% ( 1) 00:10:26.938 13.498 - 13.556: 85.0286% ( 1) 00:10:26.938 13.556 - 13.615: 85.0636% ( 3) 00:10:26.938 13.731 - 13.789: 85.0870% ( 2) 00:10:26.938 13.905 - 13.964: 85.0987% ( 1) 00:10:26.938 14.255 - 14.313: 85.1104% ( 1) 00:10:26.938 14.371 - 14.429: 85.1220% ( 1) 00:10:26.938 14.429 - 14.487: 85.1454% ( 2) 00:10:26.938 14.662 - 14.720: 85.1571% ( 1) 00:10:26.938 14.836 - 14.895: 85.1921% ( 3) 
00:10:26.938 14.895 - 15.011: 85.2739% ( 7) 00:10:26.938 15.011 - 15.127: 85.3556% ( 7) 00:10:26.938 15.127 - 15.244: 85.4257% ( 6) 00:10:26.938 15.244 - 15.360: 85.5074% ( 7) 00:10:26.938 15.360 - 15.476: 85.6709% ( 14) 00:10:26.938 15.476 - 15.593: 85.7760% ( 9) 00:10:26.938 15.593 - 15.709: 85.9278% ( 13) 00:10:26.938 15.709 - 15.825: 86.0096% ( 7) 00:10:26.938 15.825 - 15.942: 86.1147% ( 9) 00:10:26.938 15.942 - 16.058: 86.2431% ( 11) 00:10:26.938 16.058 - 16.175: 86.3833% ( 12) 00:10:26.938 16.175 - 16.291: 86.4767% ( 8) 00:10:26.938 16.291 - 16.407: 86.5935% ( 10) 00:10:26.938 16.407 - 16.524: 86.6869% ( 8) 00:10:26.938 16.524 - 16.640: 86.7219% ( 3) 00:10:26.938 16.640 - 16.756: 86.7803% ( 5) 00:10:26.938 16.756 - 16.873: 86.7920% ( 1) 00:10:26.938 16.873 - 16.989: 86.8037% ( 1) 00:10:26.938 16.989 - 17.105: 86.8270% ( 2) 00:10:26.938 17.105 - 17.222: 86.8621% ( 3) 00:10:26.938 17.222 - 17.338: 86.9438% ( 7) 00:10:26.938 17.338 - 17.455: 86.9789% ( 3) 00:10:26.938 17.455 - 17.571: 87.0489% ( 6) 00:10:26.938 17.687 - 17.804: 87.1073% ( 5) 00:10:26.938 17.804 - 17.920: 87.1424% ( 3) 00:10:26.938 18.036 - 18.153: 87.1540% ( 1) 00:10:26.938 18.153 - 18.269: 87.2124% ( 5) 00:10:26.938 18.269 - 18.385: 87.2241% ( 1) 00:10:26.938 18.385 - 18.502: 87.2358% ( 1) 00:10:26.938 18.502 - 18.618: 87.2475% ( 1) 00:10:26.938 18.618 - 18.735: 87.2825% ( 3) 00:10:26.938 18.735 - 18.851: 87.2942% ( 1) 00:10:26.938 18.967 - 19.084: 87.3175% ( 2) 00:10:26.938 19.084 - 19.200: 87.3409% ( 2) 00:10:26.938 19.665 - 19.782: 87.3526% ( 1) 00:10:26.938 19.782 - 19.898: 87.3993% ( 4) 00:10:26.938 19.898 - 20.015: 87.4226% ( 2) 00:10:26.938 20.015 - 20.131: 87.4693% ( 4) 00:10:26.938 20.131 - 20.247: 87.4810% ( 1) 00:10:26.938 20.247 - 20.364: 87.4927% ( 1) 00:10:26.938 20.364 - 20.480: 87.5277% ( 3) 00:10:26.938 20.480 - 20.596: 87.5394% ( 1) 00:10:26.938 20.596 - 20.713: 87.5861% ( 4) 00:10:26.938 20.713 - 20.829: 87.6095% ( 2) 00:10:26.938 20.829 - 20.945: 87.6562% ( 4) 00:10:26.938 21.295 - 21.411: 87.6679% ( 1) 00:10:26.938 21.527 - 21.644: 87.6796% ( 1) 00:10:26.938 21.760 - 21.876: 87.6912% ( 1) 00:10:26.938 22.225 - 22.342: 87.7029% ( 1) 00:10:26.938 22.458 - 22.575: 87.7146% ( 1) 00:10:26.938 22.575 - 22.691: 87.7263% ( 1) 00:10:26.938 22.691 - 22.807: 87.7496% ( 2) 00:10:26.938 22.807 - 22.924: 87.7613% ( 1) 00:10:26.938 22.924 - 23.040: 87.8430% ( 7) 00:10:26.938 23.040 - 23.156: 87.9949% ( 13) 00:10:26.938 23.156 - 23.273: 88.3569% ( 31) 00:10:26.938 23.273 - 23.389: 89.1860% ( 71) 00:10:26.938 23.389 - 23.505: 90.2254% ( 89) 00:10:26.938 23.505 - 23.622: 91.4399% ( 104) 00:10:26.938 23.622 - 23.738: 92.9814% ( 132) 00:10:26.938 23.738 - 23.855: 94.0208% ( 89) 00:10:26.938 23.855 - 23.971: 94.9083% ( 76) 00:10:26.938 23.971 - 24.087: 95.5740% ( 57) 00:10:26.938 24.087 - 24.204: 96.0878% ( 44) 00:10:26.938 24.204 - 24.320: 96.5433% ( 39) 00:10:26.938 24.320 - 24.436: 96.8703% ( 28) 00:10:26.938 24.436 - 24.553: 97.1389% ( 23) 00:10:26.938 24.553 - 24.669: 97.3374% ( 17) 00:10:26.938 24.669 - 24.785: 97.5709% ( 20) 00:10:26.938 24.785 - 24.902: 97.7228% ( 13) 00:10:26.938 24.902 - 25.018: 97.8746% ( 13) 00:10:26.938 25.018 - 25.135: 97.9213% ( 4) 00:10:26.938 25.135 - 25.251: 98.0264% ( 9) 00:10:26.938 25.251 - 25.367: 98.0848% ( 5) 00:10:26.938 25.367 - 25.484: 98.1432% ( 5) 00:10:26.938 25.484 - 25.600: 98.1665% ( 2) 00:10:26.938 25.600 - 25.716: 98.1782% ( 1) 00:10:26.938 25.716 - 25.833: 98.1899% ( 1) 00:10:26.938 25.833 - 25.949: 98.2132% ( 2) 00:10:26.938 25.949 - 26.065: 98.2716% ( 5) 
00:10:26.938 26.065 - 26.182: 98.3534% ( 7) 00:10:26.938 26.182 - 26.298: 98.3767% ( 2) 00:10:26.938 26.298 - 26.415: 98.3884% ( 1) 00:10:26.938 26.415 - 26.531: 98.4001% ( 1) 00:10:26.938 26.647 - 26.764: 98.4351% ( 3) 00:10:26.938 26.764 - 26.880: 98.4935% ( 5) 00:10:26.938 26.880 - 26.996: 98.5169% ( 2) 00:10:26.938 26.996 - 27.113: 98.5286% ( 1) 00:10:26.939 27.113 - 27.229: 98.5402% ( 1) 00:10:26.939 27.229 - 27.345: 98.5519% ( 1) 00:10:26.939 27.462 - 27.578: 98.5636% ( 1) 00:10:26.939 27.578 - 27.695: 98.5753% ( 1) 00:10:26.939 27.927 - 28.044: 98.5869% ( 1) 00:10:26.939 28.160 - 28.276: 98.5986% ( 1) 00:10:26.939 28.393 - 28.509: 98.6220% ( 2) 00:10:26.939 28.625 - 28.742: 98.6337% ( 1) 00:10:26.939 28.975 - 29.091: 98.6453% ( 1) 00:10:26.939 29.091 - 29.207: 98.6920% ( 4) 00:10:26.939 29.207 - 29.324: 98.7154% ( 2) 00:10:26.939 29.324 - 29.440: 98.7271% ( 1) 00:10:26.939 29.440 - 29.556: 98.7621% ( 3) 00:10:26.939 29.789 - 30.022: 98.8906% ( 11) 00:10:26.939 30.022 - 30.255: 98.9606% ( 6) 00:10:26.939 30.255 - 30.487: 99.0424% ( 7) 00:10:26.939 30.487 - 30.720: 99.1241% ( 7) 00:10:26.939 30.720 - 30.953: 99.2059% ( 7) 00:10:26.939 30.953 - 31.185: 99.3460% ( 12) 00:10:26.939 31.185 - 31.418: 99.3927% ( 4) 00:10:26.939 31.418 - 31.651: 99.4161% ( 2) 00:10:26.939 31.651 - 31.884: 99.4394% ( 2) 00:10:26.939 31.884 - 32.116: 99.4628% ( 2) 00:10:26.939 32.116 - 32.349: 99.5095% ( 4) 00:10:26.939 32.349 - 32.582: 99.5212% ( 1) 00:10:26.939 32.582 - 32.815: 99.5446% ( 2) 00:10:26.939 33.047 - 33.280: 99.5562% ( 1) 00:10:26.939 33.280 - 33.513: 99.5913% ( 3) 00:10:26.939 33.978 - 34.211: 99.6146% ( 2) 00:10:26.939 34.211 - 34.444: 99.6263% ( 1) 00:10:26.939 35.142 - 35.375: 99.6730% ( 4) 00:10:26.939 36.305 - 36.538: 99.6847% ( 1) 00:10:26.939 36.771 - 37.004: 99.6964% ( 1) 00:10:26.939 38.167 - 38.400: 99.7197% ( 2) 00:10:26.939 38.400 - 38.633: 99.7431% ( 2) 00:10:26.939 38.633 - 38.865: 99.7664% ( 2) 00:10:26.939 38.865 - 39.098: 99.7781% ( 1) 00:10:26.939 39.098 - 39.331: 99.8015% ( 2) 00:10:26.939 39.796 - 40.029: 99.8131% ( 1) 00:10:26.939 40.262 - 40.495: 99.8248% ( 1) 00:10:26.939 40.727 - 40.960: 99.8365% ( 1) 00:10:26.939 41.425 - 41.658: 99.8482% ( 1) 00:10:26.939 41.658 - 41.891: 99.8715% ( 2) 00:10:26.939 43.055 - 43.287: 99.8832% ( 1) 00:10:26.939 43.287 - 43.520: 99.8949% ( 1) 00:10:26.939 46.778 - 47.011: 99.9066% ( 1) 00:10:26.939 47.011 - 47.244: 99.9183% ( 1) 00:10:26.939 47.476 - 47.709: 99.9299% ( 1) 00:10:26.939 48.407 - 48.640: 99.9416% ( 1) 00:10:26.939 49.571 - 49.804: 99.9533% ( 1) 00:10:26.939 54.924 - 55.156: 99.9650% ( 1) 00:10:26.939 73.542 - 74.007: 99.9766% ( 1) 00:10:26.939 110.778 - 111.244: 99.9883% ( 1) 00:10:26.939 193.629 - 194.560: 100.0000% ( 1) 00:10:26.939 00:10:26.939 00:10:26.939 real 0m1.307s 00:10:26.939 user 0m1.115s 00:10:26.939 sys 0m0.137s 00:10:26.939 20:56:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:26.939 ************************************ 00:10:26.939 END TEST nvme_overhead 00:10:26.939 ************************************ 00:10:26.939 20:56:47 -- common/autotest_common.sh@10 -- # set +x 00:10:26.939 20:56:47 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:26.939 20:56:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:26.939 20:56:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:26.939 20:56:47 -- common/autotest_common.sh@10 -- # set +x 00:10:26.939 ************************************ 00:10:26.939 START TEST 
nvme_arbitration 00:10:26.939 ************************************ 00:10:26.939 20:56:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:30.294 Initializing NVMe Controllers 00:10:30.294 Attached to 0000:00:06.0 00:10:30.294 Attached to 0000:00:07.0 00:10:30.294 Attached to 0000:00:09.0 00:10:30.294 Attached to 0000:00:08.0 00:10:30.294 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:30.294 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:30.294 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:30.294 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:30.294 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:30.294 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:30.294 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:30.294 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:30.294 Initialization complete. Launching workers. 00:10:30.294 Starting thread on core 1 with urgent priority queue 00:10:30.294 Starting thread on core 2 with urgent priority queue 00:10:30.294 Starting thread on core 3 with urgent priority queue 00:10:30.294 Starting thread on core 0 with urgent priority queue 00:10:30.294 QEMU NVMe Ctrl (12340 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:10:30.294 QEMU NVMe Ctrl (12342 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:10:30.294 QEMU NVMe Ctrl (12341 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:10:30.294 QEMU NVMe Ctrl (12342 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:10:30.294 QEMU NVMe Ctrl (12343 ) core 2: 618.67 IO/s 161.64 secs/100000 ios 00:10:30.294 QEMU NVMe Ctrl (12342 ) core 3: 704.00 IO/s 142.05 secs/100000 ios 00:10:30.294 ======================================================== 00:10:30.294 00:10:30.294 ************************************ 00:10:30.294 END TEST nvme_arbitration 00:10:30.294 ************************************ 00:10:30.294 00:10:30.294 real 0m3.517s 00:10:30.294 user 0m9.567s 00:10:30.294 sys 0m0.154s 00:10:30.294 20:56:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:30.294 20:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:30.294 20:56:51 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:10:30.294 20:56:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:30.294 20:56:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.294 20:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:30.294 ************************************ 00:10:30.294 START TEST nvme_single_aen 00:10:30.294 ************************************ 00:10:30.294 20:56:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:10:30.553 [2024-12-08 20:56:51.346534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:30.553 [2024-12-08 20:56:51.346628] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.553 [2024-12-08 20:56:51.550348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:30.553 [2024-12-08 20:56:51.552020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:30.553 [2024-12-08 20:56:51.553379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:30.553 [2024-12-08 20:56:51.554699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:30.553 Asynchronous Event Request test 00:10:30.553 Attached to 0000:00:06.0 00:10:30.553 Attached to 0000:00:07.0 00:10:30.553 Attached to 0000:00:09.0 00:10:30.553 Attached to 0000:00:08.0 00:10:30.553 Reset controller to setup AER completions for this process 00:10:30.553 Registering asynchronous event callbacks... 00:10:30.553 Getting orig temperature thresholds of all controllers 00:10:30.553 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.553 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.553 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.553 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.553 Setting all controllers temperature threshold low to trigger AER 00:10:30.553 Waiting for all controllers temperature threshold to be set lower 00:10:30.553 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.553 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:30.553 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.553 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:30.553 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.553 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:30.553 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.553 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:30.553 Waiting for all controllers to trigger AER and reset threshold 00:10:30.553 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.553 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.553 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.553 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.553 Cleaning up... 
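The output above is the whole mechanism of the AER test: the controllers report 323 Kelvin against an original threshold of 343 Kelvin, so the test lowers each threshold below the current temperature, forcing the controller to post an Asynchronous Event for log page 2 (the SMART / Health Information page), after which the aer_cb handler restores the threshold. A sketch of the direct invocation, flags copied from the run_test line above; -T selecting the temperature-threshold path and -L log enabling the nvme_ctrlr NOTICE lines are readings inferred from this output:

# single-process asynchronous event request test
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log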
00:10:30.553 00:10:30.553 real 0m0.294s 00:10:30.553 user 0m0.109s 00:10:30.553 sys 0m0.136s 00:10:30.553 20:56:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:30.553 20:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:30.553 ************************************ 00:10:30.553 END TEST nvme_single_aen 00:10:30.553 ************************************ 00:10:30.812 20:56:51 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:30.812 20:56:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:30.812 20:56:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.812 20:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:30.812 ************************************ 00:10:30.812 START TEST nvme_doorbell_aers 00:10:30.812 ************************************ 00:10:30.812 20:56:51 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:10:30.812 20:56:51 -- nvme/nvme.sh@70 -- # bdfs=() 00:10:30.812 20:56:51 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:30.812 20:56:51 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:30.812 20:56:51 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:30.812 20:56:51 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:30.812 20:56:51 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:30.812 20:56:51 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:30.812 20:56:51 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:30.812 20:56:51 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:30.812 20:56:51 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:30.812 20:56:51 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:30.812 20:56:51 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:30.812 20:56:51 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:31.070 [2024-12-08 20:56:51.933956] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:10:41.040 Executing: test_write_invalid_db 00:10:41.040 Waiting for AER completion... 00:10:41.040 Failure: test_write_invalid_db 00:10:41.040 00:10:41.040 Executing: test_invalid_db_write_overflow_sq 00:10:41.040 Waiting for AER completion... 00:10:41.040 Failure: test_invalid_db_write_overflow_sq 00:10:41.040 00:10:41.040 Executing: test_invalid_db_write_overflow_cq 00:10:41.040 Waiting for AER completion... 00:10:41.040 Failure: test_invalid_db_write_overflow_cq 00:10:41.040 00:10:41.040 20:57:01 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:41.040 20:57:01 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:41.040 [2024-12-08 20:57:02.029759] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:10:51.011 Executing: test_write_invalid_db 00:10:51.011 Waiting for AER completion... 00:10:51.011 Failure: test_write_invalid_db 00:10:51.011 00:10:51.011 Executing: test_invalid_db_write_overflow_sq 00:10:51.011 Waiting for AER completion... 
00:10:51.011 Failure: test_invalid_db_write_overflow_sq 00:10:51.011 00:10:51.011 Executing: test_invalid_db_write_overflow_cq 00:10:51.011 Waiting for AER completion... 00:10:51.011 Failure: test_invalid_db_write_overflow_cq 00:10:51.011 00:10:51.011 20:57:11 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:51.011 20:57:11 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:51.270 [2024-12-08 20:57:12.061717] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:01.246 Executing: test_write_invalid_db 00:11:01.246 Waiting for AER completion... 00:11:01.246 Failure: test_write_invalid_db 00:11:01.246 00:11:01.246 Executing: test_invalid_db_write_overflow_sq 00:11:01.246 Waiting for AER completion... 00:11:01.246 Failure: test_invalid_db_write_overflow_sq 00:11:01.246 00:11:01.246 Executing: test_invalid_db_write_overflow_cq 00:11:01.246 Waiting for AER completion... 00:11:01.246 Failure: test_invalid_db_write_overflow_cq 00:11:01.246 00:11:01.246 20:57:21 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:01.246 20:57:21 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:01.246 [2024-12-08 20:57:22.100844] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.272 Executing: test_write_invalid_db 00:11:11.272 Waiting for AER completion... 00:11:11.272 Failure: test_write_invalid_db 00:11:11.272 00:11:11.272 Executing: test_invalid_db_write_overflow_sq 00:11:11.272 Waiting for AER completion... 00:11:11.272 Failure: test_invalid_db_write_overflow_sq 00:11:11.272 00:11:11.272 Executing: test_invalid_db_write_overflow_cq 00:11:11.272 Waiting for AER completion... 00:11:11.272 Failure: test_invalid_db_write_overflow_cq 00:11:11.272 00:11:11.272 00:11:11.272 real 0m40.225s 00:11:11.272 user 0m34.114s 00:11:11.272 sys 0m5.753s 00:11:11.272 ************************************ 00:11:11.272 END TEST nvme_doorbell_aers 00:11:11.272 ************************************ 00:11:11.272 20:57:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:11.272 20:57:31 -- common/autotest_common.sh@10 -- # set +x 00:11:11.272 20:57:31 -- nvme/nvme.sh@97 -- # uname 00:11:11.273 20:57:31 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:11.273 20:57:31 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:11:11.273 20:57:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:11:11.273 20:57:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:11.273 20:57:31 -- common/autotest_common.sh@10 -- # set +x 00:11:11.273 ************************************ 00:11:11.273 START TEST nvme_multi_aen 00:11:11.273 ************************************ 00:11:11.273 20:57:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:11:11.273 [2024-12-08 20:57:31.984559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
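The doorbell test's wall time follows from its structure: one pass per controller, each capped by timeout --preserve-status 10, so four controllers plus the deliberately provoked invalid-doorbell writes land at a little over 4 x 10 seconds (real 0m40.225s above). A sketch of a single pass, copied from the harness trace:

# one 10-second invalid-doorbell/AER pass against the first controller
timeout --preserve-status 10 \
  /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
  -r 'trtype:PCIe traddr:0000:00:06.0'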
00:11:11.273 [2024-12-08 20:57:31.985219] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.273 [2024-12-08 20:57:32.190821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:11.273 [2024-12-08 20:57:32.190901] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.190961] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.190981] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.192544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:11:11.273 [2024-12-08 20:57:32.192581] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.192637] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.192659] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.194129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:11:11.273 [2024-12-08 20:57:32.194164] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.194194] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.194212] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.195681] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:11.273 [2024-12-08 20:57:32.195714] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.195739] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.195772] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64418) is not found. Dropping the request. 00:11:11.273 [2024-12-08 20:57:32.206053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
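Unlike the single_aen run, this aer invocation carries -m, and the EAL parameter lines show the result: the parent comes up with -c 0x1 and a second process with -c 0x2, both sharing --file-prefix=spdk0 under --proc-type=auto, which is DPDK multi-process mode and is what lets the [Child] process reporting next attach to the same four controllers. A sketch of the parent invocation, copied from the run_test line; reading -m as the multi-process variant is inferred from this output:

# multi-process asynchronous event request test; the child is spawned by the test itself
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log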
00:11:11.273 Child process pid: 64938 00:11:11.273 [2024-12-08 20:57:32.206265] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.533 [Child] Asynchronous Event Request test 00:11:11.533 [Child] Attached to 0000:00:06.0 00:11:11.533 [Child] Attached to 0000:00:07.0 00:11:11.533 [Child] Attached to 0000:00:09.0 00:11:11.533 [Child] Attached to 0000:00:08.0 00:11:11.533 [Child] Registering asynchronous event callbacks... 00:11:11.533 [Child] Getting orig temperature thresholds of all controllers 00:11:11.533 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.533 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.533 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.533 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.533 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:11.533 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.533 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.533 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.533 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.533 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.533 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.533 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.533 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.533 [Child] Cleaning up... 00:11:11.533 Asynchronous Event Request test 00:11:11.533 Attached to 0000:00:06.0 00:11:11.533 Attached to 0000:00:07.0 00:11:11.533 Attached to 0000:00:09.0 00:11:11.533 Attached to 0000:00:08.0 00:11:11.533 Reset controller to setup AER completions for this process 00:11:11.533 Registering asynchronous event callbacks... 
00:11:11.533 Getting orig temperature thresholds of all controllers 00:11:11.533 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.533 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.533 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.533 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.533 Setting all controllers temperature threshold low to trigger AER 00:11:11.533 Waiting for all controllers temperature threshold to be set lower 00:11:11.533 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.533 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:11.533 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.533 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:11:11.533 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.533 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:11:11.533 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.533 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:11:11.533 Waiting for all controllers to trigger AER and reset threshold 00:11:11.533 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.533 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.533 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.533 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.533 Cleaning up... 00:11:11.533 00:11:11.533 real 0m0.601s 00:11:11.533 user 0m0.195s 00:11:11.533 sys 0m0.285s 00:11:11.533 20:57:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:11.533 ************************************ 00:11:11.533 END TEST nvme_multi_aen 00:11:11.533 ************************************ 00:11:11.533 20:57:32 -- common/autotest_common.sh@10 -- # set +x 00:11:11.533 20:57:32 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:11.533 20:57:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:11.533 20:57:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:11.533 20:57:32 -- common/autotest_common.sh@10 -- # set +x 00:11:11.793 ************************************ 00:11:11.793 START TEST nvme_startup 00:11:11.793 ************************************ 00:11:11.793 20:57:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:12.054 Initializing NVMe Controllers 00:11:12.054 Attached to 0000:00:06.0 00:11:12.054 Attached to 0000:00:07.0 00:11:12.054 Attached to 0000:00:09.0 00:11:12.054 Attached to 0000:00:08.0 00:11:12.054 Initialization complete. 00:11:12.054 Time used:204441.984 (us). 
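Time used is the only metric the startup test reports: it times controller initialization, here roughly 204 ms, against the -t 1000000 budget passed on the run_test line above. Reading -t as a limit in microseconds is an inference from the flag value versus the printed result, not from the tool's help text. Sketch:

# assert that controller startup completes within 1,000,000 us
/home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000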
00:11:12.054 00:11:12.054 real 0m0.281s 00:11:12.054 user 0m0.100s 00:11:12.054 sys 0m0.128s 00:11:12.054 20:57:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:12.054 20:57:32 -- common/autotest_common.sh@10 -- # set +x 00:11:12.054 ************************************ 00:11:12.054 END TEST nvme_startup 00:11:12.054 ************************************ 00:11:12.054 20:57:32 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:12.054 20:57:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.054 20:57:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.054 20:57:32 -- common/autotest_common.sh@10 -- # set +x 00:11:12.054 ************************************ 00:11:12.054 START TEST nvme_multi_secondary 00:11:12.054 ************************************ 00:11:12.054 20:57:32 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:11:12.054 20:57:32 -- nvme/nvme.sh@52 -- # pid0=64990 00:11:12.054 20:57:32 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:12.054 20:57:32 -- nvme/nvme.sh@54 -- # pid1=64991 00:11:12.054 20:57:32 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:12.054 20:57:32 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:15.385 Initializing NVMe Controllers 00:11:15.385 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:15.385 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:15.385 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:15.385 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:15.385 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:11:15.385 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:11:15.385 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:11:15.385 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:11:15.385 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:11:15.385 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:11:15.385 Initialization complete. Launching workers. 
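The three spdk_nvme_perf invocations traced above are the core of multi_secondary: all share -i 0, so under DPDK's proc-type=auto convention the first instance up acts as the primary process and the other two attach as secondaries to the same hugepage state, while the disjoint core masks (-c 0x1, -c 0x2, -c 0x4) and the longer -t 5 on the first instance keep the primary alive for the secondaries' whole runs. A condensed sketch of the same orchestration, commands and wait structure copied from the nvme.sh trace:

# primary: 5 s of QD16 4 KiB reads; two 3 s secondaries on other cores
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
wait $pid0
wait $pid1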
00:11:15.385 ======================================================== 00:11:15.385 Latency(us) 00:11:15.385 Device Information : IOPS MiB/s Average min max 00:11:15.385 PCIE (0000:00:06.0) NSID 1 from core 1: 5782.70 22.59 2765.23 1018.12 5550.77 00:11:15.385 PCIE (0000:00:07.0) NSID 1 from core 1: 5782.70 22.59 2766.44 1074.71 5838.23 00:11:15.385 PCIE (0000:00:09.0) NSID 1 from core 1: 5782.70 22.59 2766.38 1077.79 5577.65 00:11:15.385 PCIE (0000:00:08.0) NSID 1 from core 1: 5782.70 22.59 2766.36 1094.17 5538.88 00:11:15.385 PCIE (0000:00:08.0) NSID 2 from core 1: 5782.70 22.59 2766.25 1079.79 5394.40 00:11:15.385 PCIE (0000:00:08.0) NSID 3 from core 1: 5782.70 22.59 2766.20 1102.16 5263.25 00:11:15.385 ======================================================== 00:11:15.385 Total : 34696.18 135.53 2766.14 1018.12 5838.23 00:11:15.385 00:11:15.643 Initializing NVMe Controllers 00:11:15.643 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:15.643 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:15.643 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:15.643 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:15.643 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:11:15.643 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:11:15.643 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:11:15.643 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:11:15.643 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:11:15.643 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:11:15.643 Initialization complete. Launching workers. 00:11:15.643 ======================================================== 00:11:15.643 Latency(us) 00:11:15.643 Device Information : IOPS MiB/s Average min max 00:11:15.643 PCIE (0000:00:06.0) NSID 1 from core 2: 2478.42 9.68 6454.13 1920.40 13649.94 00:11:15.643 PCIE (0000:00:07.0) NSID 1 from core 2: 2478.42 9.68 6455.44 1944.31 12538.04 00:11:15.643 PCIE (0000:00:09.0) NSID 1 from core 2: 2478.42 9.68 6454.77 1972.72 12479.39 00:11:15.643 PCIE (0000:00:08.0) NSID 1 from core 2: 2478.42 9.68 6455.28 2082.07 13428.78 00:11:15.643 PCIE (0000:00:08.0) NSID 2 from core 2: 2478.42 9.68 6455.16 1747.89 12570.78 00:11:15.643 PCIE (0000:00:08.0) NSID 3 from core 2: 2478.42 9.68 6455.04 1536.15 13528.29 00:11:15.643 ======================================================== 00:11:15.643 Total : 14870.54 58.09 6454.97 1536.15 13649.94 00:11:15.643 00:11:15.643 20:57:36 -- nvme/nvme.sh@56 -- # wait 64990 00:11:17.544 Initializing NVMe Controllers 00:11:17.544 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:17.544 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:17.544 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:17.544 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:17.544 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:17.544 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:17.544 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:17.544 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:17.544 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:17.544 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:17.544 Initialization complete. Launching workers. 
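Both Latency(us) tables above can be sanity-checked with Little's law: each perf process keeps -q 16 IOs outstanding per namespace, so per-namespace IOPS should be roughly queue depth over average latency,

\[ \frac{16}{2766.14\times 10^{-6}\,\mathrm{s}} \approx 5784 \quad\text{and}\quad \frac{16}{6454.97\times 10^{-6}\,\mathrm{s}} \approx 2479, \]

within a fraction of a percent of the reported 5782.70 (core 1) and 2478.42 (core 2) figures, and the same check reproduces every other Device Information row in this test.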
00:11:17.544 ======================================================== 00:11:17.544 Latency(us) 00:11:17.544 Device Information : IOPS MiB/s Average min max 00:11:17.544 PCIE (0000:00:06.0) NSID 1 from core 0: 8308.32 32.45 1924.33 986.59 6095.29 00:11:17.544 PCIE (0000:00:07.0) NSID 1 from core 0: 8308.32 32.45 1925.28 988.33 5759.79 00:11:17.544 PCIE (0000:00:09.0) NSID 1 from core 0: 8308.32 32.45 1925.22 945.96 5886.15 00:11:17.544 PCIE (0000:00:08.0) NSID 1 from core 0: 8308.32 32.45 1925.18 946.25 5842.82 00:11:17.544 PCIE (0000:00:08.0) NSID 2 from core 0: 8308.32 32.45 1925.11 877.08 5667.21 00:11:17.544 PCIE (0000:00:08.0) NSID 3 from core 0: 8308.32 32.45 1925.03 818.78 5649.47 00:11:17.544 ======================================================== 00:11:17.544 Total : 49849.93 194.73 1925.02 818.78 6095.29 00:11:17.544 00:11:17.544 20:57:38 -- nvme/nvme.sh@57 -- # wait 64991 00:11:17.544 20:57:38 -- nvme/nvme.sh@61 -- # pid0=65069 00:11:17.544 20:57:38 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:17.544 20:57:38 -- nvme/nvme.sh@63 -- # pid1=65070 00:11:17.544 20:57:38 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:17.544 20:57:38 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:21.734 Initializing NVMe Controllers 00:11:21.734 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:21.734 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:21.734 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:21.734 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:21.734 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:21.734 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:21.734 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:21.734 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:21.734 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:21.734 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:21.734 Initialization complete. Launching workers. 
00:11:21.734 ======================================================== 00:11:21.734 Latency(us) 00:11:21.734 Device Information : IOPS MiB/s Average min max 00:11:21.734 PCIE (0000:00:06.0) NSID 1 from core 0: 5164.63 20.17 3096.29 1054.92 7082.13 00:11:21.734 PCIE (0000:00:07.0) NSID 1 from core 0: 5164.63 20.17 3097.61 1071.21 7595.77 00:11:21.734 PCIE (0000:00:09.0) NSID 1 from core 0: 5164.63 20.17 3097.69 1018.93 6866.97 00:11:21.734 PCIE (0000:00:08.0) NSID 1 from core 0: 5164.63 20.17 3097.65 1028.96 6766.80 00:11:21.734 PCIE (0000:00:08.0) NSID 2 from core 0: 5164.63 20.17 3097.75 1063.57 6470.16 00:11:21.734 PCIE (0000:00:08.0) NSID 3 from core 0: 5164.63 20.17 3097.73 1056.88 6562.61 00:11:21.734 ======================================================== 00:11:21.734 Total : 30987.78 121.05 3097.45 1018.93 7595.77 00:11:21.734 00:11:21.734 Initializing NVMe Controllers 00:11:21.734 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:21.734 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:21.734 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:21.734 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:21.734 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:11:21.734 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:11:21.734 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:11:21.734 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:11:21.734 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:11:21.734 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:11:21.734 Initialization complete. Launching workers. 00:11:21.734 ======================================================== 00:11:21.734 Latency(us) 00:11:21.734 Device Information : IOPS MiB/s Average min max 00:11:21.734 PCIE (0000:00:06.0) NSID 1 from core 1: 5070.61 19.81 3153.55 1026.83 6478.94 00:11:21.734 PCIE (0000:00:07.0) NSID 1 from core 1: 5070.61 19.81 3154.73 1050.85 6490.87 00:11:21.734 PCIE (0000:00:09.0) NSID 1 from core 1: 5070.61 19.81 3154.78 1033.30 6684.28 00:11:21.734 PCIE (0000:00:08.0) NSID 1 from core 1: 5070.61 19.81 3154.68 1035.51 7180.39 00:11:21.734 PCIE (0000:00:08.0) NSID 2 from core 1: 5070.61 19.81 3154.67 1062.39 7455.77 00:11:21.734 PCIE (0000:00:08.0) NSID 3 from core 1: 5070.61 19.81 3154.63 1072.94 7940.35 00:11:21.734 ======================================================== 00:11:21.734 Total : 30423.63 118.84 3154.51 1026.83 7940.35 00:11:21.734 00:11:23.113 Initializing NVMe Controllers 00:11:23.113 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:23.113 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:23.113 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:23.113 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:23.113 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:11:23.113 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:11:23.113 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:11:23.113 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:11:23.113 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:11:23.113 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:11:23.113 Initialization complete. Launching workers. 
00:11:23.113 ======================================================== 00:11:23.113 Latency(us) 00:11:23.113 Device Information : IOPS MiB/s Average min max 00:11:23.113 PCIE (0000:00:06.0) NSID 1 from core 2: 3617.94 14.13 4420.38 1005.71 13821.90 00:11:23.113 PCIE (0000:00:07.0) NSID 1 from core 2: 3617.94 14.13 4421.78 1029.40 14313.36 00:11:23.113 PCIE (0000:00:09.0) NSID 1 from core 2: 3617.94 14.13 4421.72 980.72 14762.22 00:11:23.113 PCIE (0000:00:08.0) NSID 1 from core 2: 3617.94 14.13 4421.94 975.14 13240.09 00:11:23.113 PCIE (0000:00:08.0) NSID 2 from core 2: 3617.94 14.13 4421.87 862.08 13522.44 00:11:23.113 PCIE (0000:00:08.0) NSID 3 from core 2: 3617.94 14.13 4421.58 816.95 16215.34 00:11:23.113 ======================================================== 00:11:23.113 Total : 21707.64 84.80 4421.55 816.95 16215.34 00:11:23.113 00:11:23.113 20:57:43 -- nvme/nvme.sh@65 -- # wait 65069 00:11:23.113 20:57:43 -- nvme/nvme.sh@66 -- # wait 65070 00:11:23.113 00:11:23.113 real 0m11.079s 00:11:23.113 user 0m19.045s 00:11:23.113 sys 0m0.857s 00:11:23.113 20:57:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.113 ************************************ 00:11:23.113 END TEST nvme_multi_secondary 00:11:23.113 ************************************ 00:11:23.113 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:11:23.113 20:57:44 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:23.113 20:57:44 -- nvme/nvme.sh@102 -- # kill_stub 00:11:23.113 20:57:44 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63983 ]] 00:11:23.113 20:57:44 -- common/autotest_common.sh@1076 -- # kill 63983 00:11:23.113 20:57:44 -- common/autotest_common.sh@1077 -- # wait 63983 00:11:24.050 [2024-12-08 20:57:45.023365] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:24.050 [2024-12-08 20:57:45.023474] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:24.050 [2024-12-08 20:57:45.023508] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:24.050 [2024-12-08 20:57:45.023533] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:24.618 [2024-12-08 20:57:45.554031] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:24.618 [2024-12-08 20:57:45.554148] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:24.618 [2024-12-08 20:57:45.554186] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:24.618 [2024-12-08 20:57:45.554219] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:25.606 [2024-12-08 20:57:46.552137] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 
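The latency tables above are the output of the nvme_multi_secondary case: one primary spdk_nvme_perf process and two secondary processes share shared-memory group 0 (-i 0), so all three attach to the same four controllers while pinned to different core masks. A minimal sketch of that pattern, using only the binary path and flags that appear in the commands logged above (the backgrounding and pid bookkeeping are illustrative):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # primary process: core mask 0x1, runs longest in the first round (-t 5)
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    # secondary processes reuse shared-memory group 0 on core masks 0x2 and 0x4
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!
    wait "$pid0" "$pid1"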
00:11:25.606 [2024-12-08 20:57:46.552226] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:25.606 [2024-12-08 20:57:46.552257] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:25.606 [2024-12-08 20:57:46.552282] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:27.512 [2024-12-08 20:57:48.063203] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:27.512 [2024-12-08 20:57:48.063291] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:27.512 [2024-12-08 20:57:48.063322] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:27.512 [2024-12-08 20:57:48.063351] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:11:27.512 20:57:48 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:11:27.512 20:57:48 -- common/autotest_common.sh@1083 -- # echo 2 00:11:27.513 20:57:48 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:27.513 20:57:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:27.513 20:57:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:27.513 20:57:48 -- common/autotest_common.sh@10 -- # set +x 00:11:27.513 ************************************ 00:11:27.513 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:27.513 ************************************ 00:11:27.513 20:57:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:27.513 * Looking for test storage... 00:11:27.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:27.513 20:57:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:27.513 20:57:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:27.513 20:57:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:27.513 20:57:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:27.513 20:57:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:27.513 20:57:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:27.513 20:57:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:27.513 20:57:48 -- scripts/common.sh@335 -- # IFS=.-: 00:11:27.513 20:57:48 -- scripts/common.sh@335 -- # read -ra ver1 00:11:27.513 20:57:48 -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.513 20:57:48 -- scripts/common.sh@336 -- # read -ra ver2 00:11:27.513 20:57:48 -- scripts/common.sh@337 -- # local 'op=<' 00:11:27.513 20:57:48 -- scripts/common.sh@339 -- # ver1_l=2 00:11:27.513 20:57:48 -- scripts/common.sh@340 -- # ver2_l=1 00:11:27.513 20:57:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:27.513 20:57:48 -- scripts/common.sh@343 -- # case "$op" in 00:11:27.513 20:57:48 -- scripts/common.sh@344 -- # : 1 00:11:27.513 20:57:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:27.513 20:57:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.513 20:57:48 -- scripts/common.sh@364 -- # decimal 1 00:11:27.513 20:57:48 -- scripts/common.sh@352 -- # local d=1 00:11:27.513 20:57:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.513 20:57:48 -- scripts/common.sh@354 -- # echo 1 00:11:27.513 20:57:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:27.513 20:57:48 -- scripts/common.sh@365 -- # decimal 2 00:11:27.513 20:57:48 -- scripts/common.sh@352 -- # local d=2 00:11:27.513 20:57:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.513 20:57:48 -- scripts/common.sh@354 -- # echo 2 00:11:27.513 20:57:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:27.513 20:57:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:27.513 20:57:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:27.513 20:57:48 -- scripts/common.sh@367 -- # return 0 00:11:27.513 20:57:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.513 20:57:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:27.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.513 --rc genhtml_branch_coverage=1 00:11:27.513 --rc genhtml_function_coverage=1 00:11:27.513 --rc genhtml_legend=1 00:11:27.513 --rc geninfo_all_blocks=1 00:11:27.513 --rc geninfo_unexecuted_blocks=1 00:11:27.513 00:11:27.513 ' 00:11:27.513 20:57:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:27.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.513 --rc genhtml_branch_coverage=1 00:11:27.513 --rc genhtml_function_coverage=1 00:11:27.513 --rc genhtml_legend=1 00:11:27.513 --rc geninfo_all_blocks=1 00:11:27.513 --rc geninfo_unexecuted_blocks=1 00:11:27.513 00:11:27.513 ' 00:11:27.513 20:57:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:27.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.513 --rc genhtml_branch_coverage=1 00:11:27.513 --rc genhtml_function_coverage=1 00:11:27.513 --rc genhtml_legend=1 00:11:27.513 --rc geninfo_all_blocks=1 00:11:27.513 --rc geninfo_unexecuted_blocks=1 00:11:27.513 00:11:27.513 ' 00:11:27.513 20:57:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:27.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.513 --rc genhtml_branch_coverage=1 00:11:27.513 --rc genhtml_function_coverage=1 00:11:27.513 --rc genhtml_legend=1 00:11:27.513 --rc geninfo_all_blocks=1 00:11:27.513 --rc geninfo_unexecuted_blocks=1 00:11:27.513 00:11:27.513 ' 00:11:27.513 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:27.513 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:27.513 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:27.513 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:27.513 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:27.513 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:27.513 20:57:48 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:27.513 20:57:48 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:27.513 20:57:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:27.513 20:57:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:27.513 20:57:48 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:27.513 20:57:48 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:27.513 20:57:48 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:27.513 20:57:48 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:27.513 20:57:48 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:27.772 20:57:48 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:27.772 20:57:48 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:27.772 20:57:48 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:27.772 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:11:27.772 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:11:27.772 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65267 00:11:27.772 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:27.772 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:27.772 20:57:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65267 00:11:27.772 20:57:48 -- common/autotest_common.sh@829 -- # '[' -z 65267 ']' 00:11:27.772 20:57:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.772 20:57:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.772 20:57:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.772 20:57:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.772 20:57:48 -- common/autotest_common.sh@10 -- # set +x 00:11:27.772 [2024-12-08 20:57:48.717328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
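Before any admin commands are injected, the reset_stuck_adm_cmd test works out which PCI device to target: gen_nvme.sh prints a JSON config describing every local NVMe controller, jq extracts each transport address, and the first address becomes the bdf passed to bdev_nvme_attach_controller. A minimal sketch of that lookup, assuming it is run from the SPDK repository root (this log uses the absolute /home/vagrant/spdk_repo/spdk paths):

    # enumerate local NVMe controllers and collect their PCI addresses
    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}    # 0000:00:06.0 in this run
    # attach the chosen controller to the running spdk_tgt under the name nvme0
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$bdf"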
00:11:27.772 [2024-12-08 20:57:48.717500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65267 ] 00:11:28.031 [2024-12-08 20:57:48.912740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.290 [2024-12-08 20:57:49.141003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:28.290 [2024-12-08 20:57:49.141418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.290 [2024-12-08 20:57:49.141684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.290 [2024-12-08 20:57:49.141851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.290 [2024-12-08 20:57:49.141863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.667 20:57:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.667 20:57:50 -- common/autotest_common.sh@862 -- # return 0 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:11:29.667 20:57:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.667 20:57:50 -- common/autotest_common.sh@10 -- # set +x 00:11:29.667 nvme0n1 00:11:29.667 20:57:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_8g4qJ.txt 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:29.667 20:57:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.667 20:57:50 -- common/autotest_common.sh@10 -- # set +x 00:11:29.667 true 00:11:29.667 20:57:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733691470 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65298 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:29.667 20:57:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:31.568 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:31.568 20:57:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.568 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:11:31.568 [2024-12-08 20:57:52.517543] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:31.568 [2024-12-08 20:57:52.518107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:31.568 [2024-12-08 20:57:52.518151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:31.568 [2024-12-08 20:57:52.518172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.568 [2024-12-08 20:57:52.520322] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:31.568 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65298 00:11:31.568 20:57:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.568 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65298 00:11:31.568 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65298 00:11:31.568 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:31.568 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:31.568 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:31.568 20:57:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.568 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:11:31.568 20:57:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.568 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:31.568 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_8g4qJ.txt 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_8g4qJ.txt 00:11:31.826 20:57:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65267 00:11:31.826 20:57:52 -- common/autotest_common.sh@936 -- # '[' -z 65267 ']' 00:11:31.826 20:57:52 -- common/autotest_common.sh@940 -- # kill -0 65267 00:11:31.826 20:57:52 -- common/autotest_common.sh@941 -- # uname 00:11:31.826 20:57:52 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:31.826 20:57:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65267 00:11:31.826 killing process with pid 65267 00:11:31.826 20:57:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:31.826 20:57:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:31.826 20:57:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65267' 00:11:31.826 20:57:52 -- common/autotest_common.sh@955 -- # kill 65267 00:11:31.826 20:57:52 -- common/autotest_common.sh@960 -- # wait 65267 00:11:33.728 20:57:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:33.728 20:57:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:33.728 00:11:33.728 real 0m6.091s 00:11:33.728 user 0m21.387s 00:11:33.728 sys 0m0.653s 00:11:33.729 20:57:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.729 ************************************ 00:11:33.729 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:33.729 ************************************ 00:11:33.729 20:57:54 -- common/autotest_common.sh@10 -- # set +x 00:11:33.729 20:57:54 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:33.729 20:57:54 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:33.729 20:57:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:33.729 20:57:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.729 20:57:54 -- common/autotest_common.sh@10 -- # set +x 00:11:33.729 ************************************ 00:11:33.729 START TEST nvme_fio 00:11:33.729 ************************************ 00:11:33.729 20:57:54 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:11:33.729 20:57:54 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:33.729 20:57:54 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:33.729 20:57:54 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:33.729 20:57:54 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:33.729 20:57:54 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:33.729 20:57:54 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:33.729 20:57:54 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:33.729 20:57:54 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:33.729 20:57:54 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:33.729 20:57:54 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:33.729 20:57:54 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:11:33.729 20:57:54 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:33.729 20:57:54 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:33.729 20:57:54 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:33.729 20:57:54 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:33.988 20:57:54 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:33.988 20:57:54 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:34.247 20:57:55 -- nvme/nvme.sh@41 -- # bs=4096 00:11:34.247 20:57:55 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:34.247 20:57:55 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:34.247 20:57:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:34.247 20:57:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:34.247 20:57:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:34.247 20:57:55 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:34.247 20:57:55 -- common/autotest_common.sh@1330 -- # shift 00:11:34.247 20:57:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:34.247 20:57:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:34.247 20:57:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:34.247 20:57:55 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:34.247 20:57:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:34.247 20:57:55 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:34.247 20:57:55 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:34.247 20:57:55 -- common/autotest_common.sh@1336 -- # break 00:11:34.247 20:57:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:34.247 20:57:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:34.247 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:34.247 fio-3.35 00:11:34.247 Starting 1 thread 00:11:37.535 00:11:37.535 test: (groupid=0, jobs=1): err= 0: pid=65446: Sun Dec 8 20:57:58 2024 00:11:37.535 read: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(103MiB/2001msec) 00:11:37.535 slat (nsec): min=3882, max=70470, avg=6448.99, stdev=4422.16 00:11:37.535 clat (usec): min=201, max=9482, avg=4828.87, stdev=400.68 00:11:37.535 lat (usec): min=206, max=9553, avg=4835.32, stdev=401.08 00:11:37.535 clat percentiles (usec): 00:11:37.535 | 1.00th=[ 3785], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4555], 00:11:37.535 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:11:37.535 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:11:37.535 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6915], 99.95th=[ 8291], 00:11:37.535 | 99.99th=[ 9372] 00:11:37.535 bw ( KiB/s): min=50312, max=54400, per=99.71%, avg=52600.00, stdev=2087.23, samples=3 00:11:37.535 iops : min=12578, max=13600, avg=13150.00, stdev=521.81, samples=3 00:11:37.535 write: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(103MiB/2001msec); 0 zone resets 00:11:37.535 slat (nsec): min=4004, max=66382, avg=6611.40, stdev=4496.27 00:11:37.535 clat (usec): min=306, max=9336, avg=4841.47, stdev=394.17 00:11:37.535 lat (usec): min=312, max=9348, avg=4848.08, stdev=394.55 00:11:37.535 clat percentiles (usec): 00:11:37.535 | 1.00th=[ 3818], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:11:37.535 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:11:37.535 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5473], 00:11:37.535 | 99.00th=[ 5735], 99.50th=[ 
5800], 99.90th=[ 7046], 99.95th=[ 8225], 00:11:37.535 | 99.99th=[ 9110] 00:11:37.535 bw ( KiB/s): min=50688, max=54200, per=99.87%, avg=52677.33, stdev=1801.91, samples=3 00:11:37.535 iops : min=12672, max=13550, avg=13169.33, stdev=450.48, samples=3 00:11:37.535 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:37.535 lat (msec) : 2=0.05%, 4=1.94%, 10=97.97% 00:11:37.535 cpu : usr=98.75%, sys=0.00%, ctx=5, majf=0, minf=609 00:11:37.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:37.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:37.535 issued rwts: total=26390,26386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:37.535 00:11:37.535 Run status group 0 (all jobs): 00:11:37.535 READ: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=103MiB (108MB), run=2001-2001msec 00:11:37.535 WRITE: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=103MiB (108MB), run=2001-2001msec 00:11:37.535 ----------------------------------------------------- 00:11:37.536 Suppressions used: 00:11:37.536 count bytes template 00:11:37.536 1 32 /usr/src/fio/parse.c 00:11:37.536 1 8 libtcmalloc_minimal.so 00:11:37.536 ----------------------------------------------------- 00:11:37.536 00:11:37.536 20:57:58 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:37.536 20:57:58 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:37.536 20:57:58 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:37.536 20:57:58 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:37.795 20:57:58 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:37.795 20:57:58 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:38.054 20:57:58 -- nvme/nvme.sh@41 -- # bs=4096 00:11:38.054 20:57:58 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:38.054 20:57:58 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:38.054 20:57:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:38.054 20:57:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:38.054 20:57:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:38.054 20:57:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:38.054 20:57:58 -- common/autotest_common.sh@1330 -- # shift 00:11:38.054 20:57:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:38.054 20:57:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:38.054 20:57:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:38.054 20:57:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:38.054 20:57:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:38.054 20:57:58 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:38.054 20:57:58 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:11:38.054 20:57:58 -- common/autotest_common.sh@1336 -- # break 00:11:38.054 20:57:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:38.054 20:57:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:38.312 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:38.312 fio-3.35 00:11:38.312 Starting 1 thread 00:11:41.603 00:11:41.603 test: (groupid=0, jobs=1): err= 0: pid=65508: Sun Dec 8 20:58:02 2024 00:11:41.603 read: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(105MiB/2001msec) 00:11:41.603 slat (nsec): min=4007, max=61634, avg=6548.44, stdev=4161.32 00:11:41.603 clat (usec): min=304, max=8758, avg=4759.37, stdev=411.37 00:11:41.603 lat (usec): min=310, max=8819, avg=4765.91, stdev=411.78 00:11:41.603 clat percentiles (usec): 00:11:41.603 | 1.00th=[ 3752], 5.00th=[ 4178], 10.00th=[ 4359], 20.00th=[ 4490], 00:11:41.603 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4817], 00:11:41.603 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5211], 95.00th=[ 5407], 00:11:41.603 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 6783], 99.95th=[ 7767], 00:11:41.603 | 99.99th=[ 8717] 00:11:41.603 bw ( KiB/s): min=51072, max=54752, per=99.44%, avg=53248.00, stdev=1929.84, samples=3 00:11:41.603 iops : min=12768, max=13688, avg=13312.00, stdev=482.46, samples=3 00:11:41.603 write: IOPS=13.4k, BW=52.2MiB/s (54.8MB/s)(105MiB/2001msec); 0 zone resets 00:11:41.603 slat (usec): min=4, max=185, avg= 6.66, stdev= 4.24 00:11:41.603 clat (usec): min=260, max=8679, avg=4772.17, stdev=412.02 00:11:41.603 lat (usec): min=266, max=8691, avg=4778.83, stdev=412.35 00:11:41.603 clat percentiles (usec): 00:11:41.603 | 1.00th=[ 3785], 5.00th=[ 4178], 10.00th=[ 4359], 20.00th=[ 4490], 00:11:41.603 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4817], 00:11:41.603 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5276], 95.00th=[ 5407], 00:11:41.603 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 6849], 99.95th=[ 7701], 00:11:41.603 | 99.99th=[ 8586] 00:11:41.603 bw ( KiB/s): min=51384, max=54552, per=99.71%, avg=53336.00, stdev=1707.43, samples=3 00:11:41.603 iops : min=12846, max=13638, avg=13334.00, stdev=426.86, samples=3 00:11:41.603 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:41.603 lat (msec) : 2=0.05%, 4=2.84%, 10=97.07% 00:11:41.603 cpu : usr=97.95%, sys=0.40%, ctx=31, majf=0, minf=608 00:11:41.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:41.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.603 issued rwts: total=26787,26758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.603 00:11:41.603 Run status group 0 (all jobs): 00:11:41.603 READ: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=105MiB (110MB), run=2001-2001msec 00:11:41.603 WRITE: bw=52.2MiB/s (54.8MB/s), 52.2MiB/s-52.2MiB/s (54.8MB/s-54.8MB/s), io=105MiB (110MB), run=2001-2001msec 00:11:41.603 ----------------------------------------------------- 00:11:41.603 Suppressions used: 00:11:41.603 count bytes template 00:11:41.603 1 32 /usr/src/fio/parse.c 00:11:41.603 1 8 libtcmalloc_minimal.so 00:11:41.603 
----------------------------------------------------- 00:11:41.603 00:11:41.603 20:58:02 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:41.603 20:58:02 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:41.603 20:58:02 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:41.603 20:58:02 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:41.603 20:58:02 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:41.603 20:58:02 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:41.862 20:58:02 -- nvme/nvme.sh@41 -- # bs=4096 00:11:41.862 20:58:02 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:41.862 20:58:02 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:41.862 20:58:02 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:41.862 20:58:02 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:41.863 20:58:02 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:41.863 20:58:02 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:41.863 20:58:02 -- common/autotest_common.sh@1330 -- # shift 00:11:41.863 20:58:02 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:41.863 20:58:02 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:41.863 20:58:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:41.863 20:58:02 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:41.863 20:58:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:41.863 20:58:02 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:41.863 20:58:02 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:41.863 20:58:02 -- common/autotest_common.sh@1336 -- # break 00:11:41.863 20:58:02 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:41.863 20:58:02 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:42.122 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:42.122 fio-3.35 00:11:42.122 Starting 1 thread 00:11:45.465 00:11:45.465 test: (groupid=0, jobs=1): err= 0: pid=65576: Sun Dec 8 20:58:06 2024 00:11:45.465 read: IOPS=13.5k, BW=52.8MiB/s (55.3MB/s)(106MiB/2001msec) 00:11:45.466 slat (usec): min=4, max=102, avg= 6.15, stdev= 3.92 00:11:45.466 clat (usec): min=298, max=9642, avg=4714.86, stdev=419.39 00:11:45.466 lat (usec): min=302, max=9745, avg=4721.00, stdev=419.75 00:11:45.466 clat percentiles (usec): 00:11:45.466 | 1.00th=[ 3720], 5.00th=[ 4047], 10.00th=[ 4228], 20.00th=[ 4424], 00:11:45.466 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4817], 00:11:45.466 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5342], 00:11:45.466 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 7504], 99.95th=[ 8356], 00:11:45.466 | 99.99th=[ 9503] 00:11:45.466 bw ( 
KiB/s): min=51320, max=54896, per=98.63%, avg=53309.33, stdev=1821.69, samples=3 00:11:45.466 iops : min=12830, max=13724, avg=13327.33, stdev=455.42, samples=3 00:11:45.466 write: IOPS=13.5k, BW=52.7MiB/s (55.3MB/s)(106MiB/2001msec); 0 zone resets 00:11:45.466 slat (nsec): min=4083, max=52235, avg=6335.50, stdev=3991.87 00:11:45.466 clat (usec): min=315, max=9424, avg=4724.79, stdev=424.20 00:11:45.466 lat (usec): min=321, max=9436, avg=4731.12, stdev=424.54 00:11:45.466 clat percentiles (usec): 00:11:45.466 | 1.00th=[ 3720], 5.00th=[ 4047], 10.00th=[ 4228], 20.00th=[ 4424], 00:11:45.466 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4817], 00:11:45.466 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5342], 00:11:45.466 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 7504], 99.95th=[ 8225], 00:11:45.466 | 99.99th=[ 9241] 00:11:45.466 bw ( KiB/s): min=51552, max=54768, per=98.84%, avg=53381.33, stdev=1653.07, samples=3 00:11:45.466 iops : min=12888, max=13692, avg=13345.33, stdev=413.27, samples=3 00:11:45.466 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:45.466 lat (msec) : 2=0.05%, 4=3.85%, 10=96.06% 00:11:45.466 cpu : usr=98.55%, sys=0.20%, ctx=3, majf=0, minf=609 00:11:45.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:45.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.466 issued rwts: total=27039,27016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.466 00:11:45.466 Run status group 0 (all jobs): 00:11:45.466 READ: bw=52.8MiB/s (55.3MB/s), 52.8MiB/s-52.8MiB/s (55.3MB/s-55.3MB/s), io=106MiB (111MB), run=2001-2001msec 00:11:45.466 WRITE: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s), io=106MiB (111MB), run=2001-2001msec 00:11:45.466 ----------------------------------------------------- 00:11:45.466 Suppressions used: 00:11:45.466 count bytes template 00:11:45.466 1 32 /usr/src/fio/parse.c 00:11:45.466 1 8 libtcmalloc_minimal.so 00:11:45.466 ----------------------------------------------------- 00:11:45.466 00:11:45.466 20:58:06 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:45.466 20:58:06 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:45.466 20:58:06 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:45.466 20:58:06 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:45.724 20:58:06 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:45.724 20:58:06 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:45.982 20:58:06 -- nvme/nvme.sh@41 -- # bs=4096 00:11:45.982 20:58:06 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:45.982 20:58:06 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:45.982 20:58:06 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:45.983 20:58:06 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:45.983 20:58:06 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:45.983 20:58:06 -- 
common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:45.983 20:58:06 -- common/autotest_common.sh@1330 -- # shift 00:11:45.983 20:58:06 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:45.983 20:58:06 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:45.983 20:58:06 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:45.983 20:58:06 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:45.983 20:58:06 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:45.983 20:58:06 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:45.983 20:58:06 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:45.983 20:58:06 -- common/autotest_common.sh@1336 -- # break 00:11:45.983 20:58:06 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:45.983 20:58:06 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:46.241 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:46.241 fio-3.35 00:11:46.241 Starting 1 thread 00:11:50.437 00:11:50.437 test: (groupid=0, jobs=1): err= 0: pid=65641: Sun Dec 8 20:58:10 2024 00:11:50.437 read: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(101MiB/2001msec) 00:11:50.437 slat (nsec): min=4039, max=62757, avg=6722.93, stdev=4087.00 00:11:50.437 clat (usec): min=311, max=8914, avg=4916.30, stdev=506.40 00:11:50.437 lat (usec): min=316, max=8977, avg=4923.02, stdev=507.06 00:11:50.437 clat percentiles (usec): 00:11:50.437 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4555], 00:11:50.437 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:11:50.437 | 70.00th=[ 5080], 80.00th=[ 5342], 90.00th=[ 5669], 95.00th=[ 5866], 00:11:50.437 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 7308], 99.95th=[ 7832], 00:11:50.437 | 99.99th=[ 8848] 00:11:50.437 bw ( KiB/s): min=48072, max=53888, per=97.95%, avg=50769.00, stdev=2930.87, samples=3 00:11:50.437 iops : min=12016, max=13472, avg=12691.33, stdev=733.69, samples=3 00:11:50.437 write: IOPS=12.9k, BW=50.6MiB/s (53.0MB/s)(101MiB/2001msec); 0 zone resets 00:11:50.437 slat (nsec): min=4197, max=57267, avg=6939.09, stdev=4190.46 00:11:50.437 clat (usec): min=265, max=8766, avg=4931.37, stdev=509.79 00:11:50.437 lat (usec): min=270, max=8778, avg=4938.31, stdev=510.39 00:11:50.437 clat percentiles (usec): 00:11:50.437 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:11:50.437 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:11:50.437 | 70.00th=[ 5080], 80.00th=[ 5342], 90.00th=[ 5735], 95.00th=[ 5866], 00:11:50.437 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 7308], 99.95th=[ 7701], 00:11:50.437 | 99.99th=[ 8586] 00:11:50.437 bw ( KiB/s): min=48088, max=53592, per=98.14%, avg=50808.67, stdev=2752.54, samples=3 00:11:50.437 iops : min=12022, max=13398, avg=12702.00, stdev=688.14, samples=3 00:11:50.437 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:11:50.437 lat (msec) : 2=0.05%, 4=1.17%, 10=98.74% 00:11:50.437 cpu : usr=98.70%, sys=0.00%, ctx=4, majf=0, minf=606 00:11:50.437 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:50.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:50.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.437 issued rwts: total=25928,25899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.437 00:11:50.437 Run status group 0 (all jobs): 00:11:50.437 READ: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=101MiB (106MB), run=2001-2001msec 00:11:50.437 WRITE: bw=50.6MiB/s (53.0MB/s), 50.6MiB/s-50.6MiB/s (53.0MB/s-53.0MB/s), io=101MiB (106MB), run=2001-2001msec 00:11:50.437 ----------------------------------------------------- 00:11:50.437 Suppressions used: 00:11:50.437 count bytes template 00:11:50.437 1 32 /usr/src/fio/parse.c 00:11:50.437 1 8 libtcmalloc_minimal.so 00:11:50.437 ----------------------------------------------------- 00:11:50.437 00:11:50.437 20:58:10 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:50.437 20:58:10 -- nvme/nvme.sh@46 -- # true 00:11:50.437 00:11:50.437 real 0m16.391s 00:11:50.437 user 0m13.102s 00:11:50.437 sys 0m1.926s 00:11:50.437 20:58:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:50.437 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:11:50.437 ************************************ 00:11:50.437 END TEST nvme_fio 00:11:50.437 ************************************ 00:11:50.437 00:11:50.437 real 1m35.049s 00:11:50.437 user 3m47.862s 00:11:50.437 sys 0m13.953s 00:11:50.437 20:58:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:50.437 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:11:50.437 ************************************ 00:11:50.437 END TEST nvme 00:11:50.437 ************************************ 00:11:50.437 20:58:10 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:11:50.437 20:58:10 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:50.437 20:58:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:50.437 20:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:50.437 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:11:50.437 ************************************ 00:11:50.437 START TEST nvme_scc 00:11:50.437 ************************************ 00:11:50.437 20:58:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:50.437 * Looking for test storage... 
00:11:50.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:50.437 20:58:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:50.437 20:58:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:50.437 20:58:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:50.437 20:58:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:50.437 20:58:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:50.437 20:58:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:50.437 20:58:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:50.437 20:58:11 -- scripts/common.sh@335 -- # IFS=.-: 00:11:50.437 20:58:11 -- scripts/common.sh@335 -- # read -ra ver1 00:11:50.437 20:58:11 -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.437 20:58:11 -- scripts/common.sh@336 -- # read -ra ver2 00:11:50.437 20:58:11 -- scripts/common.sh@337 -- # local 'op=<' 00:11:50.437 20:58:11 -- scripts/common.sh@339 -- # ver1_l=2 00:11:50.437 20:58:11 -- scripts/common.sh@340 -- # ver2_l=1 00:11:50.437 20:58:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:50.437 20:58:11 -- scripts/common.sh@343 -- # case "$op" in 00:11:50.437 20:58:11 -- scripts/common.sh@344 -- # : 1 00:11:50.437 20:58:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:50.437 20:58:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:50.437 20:58:11 -- scripts/common.sh@364 -- # decimal 1 00:11:50.437 20:58:11 -- scripts/common.sh@352 -- # local d=1 00:11:50.437 20:58:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.437 20:58:11 -- scripts/common.sh@354 -- # echo 1 00:11:50.437 20:58:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:50.437 20:58:11 -- scripts/common.sh@365 -- # decimal 2 00:11:50.437 20:58:11 -- scripts/common.sh@352 -- # local d=2 00:11:50.438 20:58:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.438 20:58:11 -- scripts/common.sh@354 -- # echo 2 00:11:50.438 20:58:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:50.438 20:58:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:50.438 20:58:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:50.438 20:58:11 -- scripts/common.sh@367 -- # return 0 00:11:50.438 20:58:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.438 20:58:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:50.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.438 --rc genhtml_branch_coverage=1 00:11:50.438 --rc genhtml_function_coverage=1 00:11:50.438 --rc genhtml_legend=1 00:11:50.438 --rc geninfo_all_blocks=1 00:11:50.438 --rc geninfo_unexecuted_blocks=1 00:11:50.438 00:11:50.438 ' 00:11:50.438 20:58:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:50.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.438 --rc genhtml_branch_coverage=1 00:11:50.438 --rc genhtml_function_coverage=1 00:11:50.438 --rc genhtml_legend=1 00:11:50.438 --rc geninfo_all_blocks=1 00:11:50.438 --rc geninfo_unexecuted_blocks=1 00:11:50.438 00:11:50.438 ' 00:11:50.438 20:58:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:50.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.438 --rc genhtml_branch_coverage=1 00:11:50.438 --rc genhtml_function_coverage=1 00:11:50.438 --rc genhtml_legend=1 00:11:50.438 --rc geninfo_all_blocks=1 00:11:50.438 --rc geninfo_unexecuted_blocks=1 00:11:50.438 00:11:50.438 ' 00:11:50.438 20:58:11 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:50.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.438 --rc genhtml_branch_coverage=1 00:11:50.438 --rc genhtml_function_coverage=1 00:11:50.438 --rc genhtml_legend=1 00:11:50.438 --rc geninfo_all_blocks=1 00:11:50.438 --rc geninfo_unexecuted_blocks=1 00:11:50.438 00:11:50.438 ' 00:11:50.438 20:58:11 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:50.438 20:58:11 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:50.438 20:58:11 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:50.438 20:58:11 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:50.438 20:58:11 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:50.438 20:58:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.438 20:58:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.438 20:58:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.438 20:58:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.438 20:58:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.438 20:58:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.438 20:58:11 -- paths/export.sh@5 -- # export PATH 00:11:50.438 20:58:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.438 20:58:11 -- nvme/functions.sh@10 -- # ctrls=() 00:11:50.438 20:58:11 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:50.438 20:58:11 -- nvme/functions.sh@11 -- # nvmes=() 00:11:50.438 20:58:11 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:50.438 20:58:11 -- nvme/functions.sh@12 -- # bdfs=() 00:11:50.438 20:58:11 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:50.438 20:58:11 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:50.438 20:58:11 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:50.438 
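The long register dump that follows is produced by scan_nvme_ctrls: for each /sys/class/nvme/nvme* device it calls nvme_get, which runs nvme id-ctrl and stores every "field : value" pair in a per-controller associative array (nvme0[vid], nvme0[sn], and so on) for later feature checks. A simplified sketch of that parsing loop, with the whitespace trimming and shift handling of the real helper omitted:

    declare -A nvme0
    # split each id-ctrl line on ':' and keep only entries that have a value
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}
        [[ -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)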
20:58:11 -- nvme/functions.sh@14 -- # nvme_name= 00:11:50.438 20:58:11 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:50.438 20:58:11 -- nvme/nvme_scc.sh@12 -- # uname 00:11:50.438 20:58:11 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:50.438 20:58:11 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:50.438 20:58:11 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:50.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:50.957 Waiting for block devices as requested 00:11:50.957 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.957 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.957 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:51.216 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:56.498 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:56.498 20:58:17 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:56.498 20:58:17 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:56.498 20:58:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:56.498 20:58:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:56.498 20:58:17 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:56.498 20:58:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:56.498 20:58:17 -- scripts/common.sh@15 -- # local i 00:11:56.498 20:58:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:56.498 20:58:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:56.498 20:58:17 -- scripts/common.sh@24 -- # return 0 00:11:56.498 20:58:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:56.498 20:58:17 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:56.498 20:58:17 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:56.498 20:58:17 -- nvme/functions.sh@18 -- # shift 00:11:56.498 20:58:17 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.498 20:58:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.498 20:58:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.498 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:56.498 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:56.498 20:58:17 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.498 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:56.498 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:56.498 20:58:17 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.498 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:56.498 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:56.498 20:58:17 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.498 20:58:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
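[Editor's note] The 20:58:11 trace above is common/autotest_common.sh deciding which coverage flags the installed lcov understands: it extracts the version with awk, calls lt 1.15 2, and scripts/common.sh answers via cmp_versions, which splits both versions on ".-:" and compares them component by component. Because 1.15 sorts below 2, the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spellings are exported into LCOV_OPTS. A minimal sketch of that comparison, assuming the helper names seen in the trace (the real scripts/common.sh also routes non-numeric components through its decimal helper):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    # Split e.g. "1.15" and "2" on dots, dashes, and colons, then
    # compare numerically; a missing component counts as 0.
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        [[ $d1 =~ ^[0-9]+$ ]] || d1=0   # non-numeric parts compare as 0
        [[ $d2 =~ ^[0-9]+$ ]] || d2=0
        if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
        if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
    done
    return 1   # equal, so strict '<' and '>' both fail
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.x: keep the old --rc lcov_* option names"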
00:11:56.498 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:56.498 20:58:17 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:56.498 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 
20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:56.499 20:58:17 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:56.499 20:58:17 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.499 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.499 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:56.500 20:58:17 
-- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:56.500 
20:58:17 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.500 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:56.500 20:58:17 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:56.500 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 
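[Editor's note] Everything from 20:58:17 onward is scan_nvme_ctrls walking /sys/class/nvme: for each controller that pci_can_use approves, nvme_get shells out to /usr/local/src/nvme-cli/nvme id-ctrl and folds every "reg : val" line of its output into a global associative array, which is why the dump above ends with nvme0[sn], nvme0[oncs], nvme0[subnqn] and so on all populated. A condensed, hedged sketch of that parse loop (the real nvme/functions.sh version is what the trace shows, eval and shift included; the field names are simply whatever nvme-cli prints):

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # e.g. declares the global nvme0 array
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip the banner and blank lines
        reg=${reg//[[:space:]]/}           # "vid       " -> "vid"
        val=${val# }                       # drop the single space after ':'
        eval "${ref}[$reg]=\$val"          # nvme0[vid]=0x1b36, ...
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

nvme_get nvme0 id-ctrl /dev/nvme0
echo "sn=${nvme0[sn]} subnqn=${nvme0[subnqn]}"   # per the dump: 12343 / nqn.2019-08.org.qemu:fdp-subsys3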
00:11:56.501 20:58:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:56.501 20:58:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:56.501 20:58:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:56.501 20:58:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:56.501 20:58:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:56.501 20:58:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:56.501 20:58:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:56.501 20:58:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:56.501 20:58:17 -- scripts/common.sh@15 -- # local i 00:11:56.501 20:58:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:56.501 20:58:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:56.501 20:58:17 -- scripts/common.sh@24 -- # return 0 00:11:56.501 20:58:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:56.501 20:58:17 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:56.501 20:58:17 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@18 -- # shift 00:11:56.501 20:58:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:56.501 20:58:17 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:56.501 20:58:17 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.501 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.501 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:56.502 20:58:17 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.502 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.502 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:56.502 20:58:17 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 
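[Editor's note] At the nvme0-to-nvme1 handoff (functions.sh@53-@63, start of the 00:11:56.501 block) each scanned controller is also registered in the bookkeeping maps declared at the top of functions.sh: ctrls maps the device name to its identify array, nvmes to its per-namespace array, bdfs to its PCI address, and ordered_ctrls keeps them index-sorted. The two dumps also describe different subsystems: nvme0 at 0000:00:09.0 reports subnqn nqn.2019-08.org.qemu:fdp-subsys3 with ctratt=0x88010, while nvme1 at 0000:00:08.0 is the plain 12342 subsystem with ctratt=0x8000. A hypothetical consumer of those maps might iterate them like this (only the array names come from the trace; the loop body is illustrative):

for ctrl in "${!ctrls[@]}"; do
    bdf=${bdfs[$ctrl]}                 # PCI address recorded by the scan
    declare -n id=${ctrls[$ctrl]}      # nameref onto nvme0 / nvme1 / ...
    printf '%s @ %s: sn=%s oncs=%s subnqn=%s\n' \
        "$ctrl" "$bdf" "${id[sn]}" "${id[oncs]}" "${id[subnqn]}"
    unset -n id                        # re-bind cleanly on the next pass
done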
00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.503 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:56.503 20:58:17 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:56.503 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:56.504 20:58:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.504 20:58:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:56.504 20:58:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:56.504 20:58:17 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@18 -- # shift 00:11:56.504 20:58:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:56.504 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.504 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.504 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
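The block above is the heart of this dump: nvme_get pipes `/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1` through a `while IFS=: read -r reg val` loop and eval-s every `field : value` pair into the `nvme1n1` associative array. A minimal, self-contained sketch of that parsing pattern follows; the helper name `parse_id_output` and the here-doc standing in for a live device are assumptions, and a nameref is used here instead of the eval the original script performs:

```bash
#!/usr/bin/env bash
# Minimal sketch of the nvme_get parsing pattern visible in the trace:
# each "field : value" line of nvme-cli output becomes one entry in an
# associative array. parse_id_output is a hypothetical name; a here-doc
# stands in for a real /dev/nvme device.
parse_id_output() {
    local -n out=$1              # nameref to the caller's associative array
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/} # strip padding around the field name
        val=${val# }             # drop the single space after the colon
        [[ -n $reg && -n $val ]] && out[$reg]=$val
    done
}

declare -A ns=()
parse_id_output ns <<'EOF'
nsze   : 0x100000
nlbaf  : 7
flbas  : 0x4
EOF
printf '%s=%s\n' nsze "${ns[nsze]}" flbas "${ns[flbas]}"
```

Because `read` assigns everything after the first colon to `val`, multi-colon values such as the `mp:25.00W operational enlat:16 ...` power-state strings seen earlier survive intact.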
00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:56.505 20:58:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.505 20:58:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:56.505 20:58:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:56.505 20:58:17 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@18 -- # shift 00:11:56.505 20:58:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.505 20:58:17 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:56.505 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.505 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.505 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:56.506 20:58:17 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.506 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.506 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.506 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
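With nvme1n2 fully parsed, `_ctrl_ns[${ns##*n}]=nvme1n2` files the namespace under its index and the `for ns in "$ctrl/${ctrl##*/}n"*` loop advances to nvme1n3. A sketch of that enumeration step, assuming a host with /sys/class/nvme populated and printing the mapping instead of re-running id-ns on every device:

```bash
#!/usr/bin/env bash
# Sketch of the namespace-enumeration loop (functions.sh@54-58 in the trace):
# walk each controller's sysfs directory and key every namespace by index.
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    declare -A ctrl_ns=()
    for ns in "$ctrl/${ctrl##*/}n"*; do   # e.g. /sys/class/nvme/nvme1/nvme1n1
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                  # nvme1n1
        ctrl_ns[${ns_dev##*n}]=$ns_dev    # index 1 -> nvme1n1, like _ctrl_ns
    done
    echo "${ctrl##*/}: ${ctrl_ns[*]}"
done
```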
00:11:56.507 20:58:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.507 20:58:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:56.507 20:58:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:56.507 20:58:17 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@18 -- # shift 00:11:56.507 20:58:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
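The id-ns values being read back for nvme1n3 are identical to the two namespaces before it, and they decode directly: nsze=0x100000 blocks under the in-use format lbaf4 (lbads:12, i.e. 2^12 = 4096-byte logical blocks) works out to 4 GiB per namespace. As a quick shell-arithmetic check:

```bash
# Decoding the id-ns values that repeat for every namespace above:
# nsze=0x100000 blocks; the in-use format lbaf4 reports lbads:12.
nsze=0x100000 lbads=12
echo "$(( nsze * (1 << lbads) )) bytes"        # 4294967296
echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"  # 4
```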
00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.507 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.507 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:56.507 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:56.508 20:58:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:56.508 20:58:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:56.508 20:58:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:56.508 20:58:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:56.508 20:58:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:56.508 20:58:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:56.508 20:58:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:56.508 20:58:17 -- scripts/common.sh@15 -- # local i 00:11:56.508 20:58:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:56.508 20:58:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:56.508 20:58:17 -- scripts/common.sh@24 -- # return 0 00:11:56.508 20:58:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:56.508 20:58:17 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:56.508 20:58:17 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@18 -- # shift 00:11:56.508 20:58:17 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.508 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:56.508 20:58:17 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:56.508 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 
20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:56.509 20:58:17 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.509 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.509 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 
00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:56.510 20:58:17 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.510 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:56.510 20:58:17 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:56.510 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:56.511 20:58:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.511 20:58:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:56.511 20:58:17 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:56.511 20:58:17 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@18 -- # shift 00:11:56.511 20:58:17 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.511 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.511 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:56.511 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:56.512 20:58:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.512 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:56.512 20:58:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:56.512 20:58:17 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:56.512 20:58:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:56.512 20:58:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:56.512 20:58:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:56.512 20:58:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:56.512 20:58:17 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:56.512 20:58:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:56.512 20:58:17 -- scripts/common.sh@15 -- # local i 00:11:56.512 20:58:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:56.512 20:58:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:56.512 20:58:17 -- scripts/common.sh@24 -- # return 0 00:11:56.512 20:58:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:56.512 20:58:17 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:56.512 20:58:17 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:56.512 20:58:17 -- nvme/functions.sh@18 -- # shift 00:11:56.513 20:58:17 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.513 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:56.513 20:58:17 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.513 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:56.514 20:58:17 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.514 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.514 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:56.514 20:58:17 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
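Most of these registers read back as 0 on the QEMU controller, but the non-zero ones are bitfields worth decoding: sqes=0x66 and cqes=0x44 pack the required and maximum queue-entry sizes as powers of two in the low and high nibbles, and oncs=0x15d is the Optional NVM Command Support mask whose bit 8 (Copy) the SCC selection below tests with (( oncs & 1 << 8 )). A quick illustrative decode of the values just captured:

    # Illustrative decode of the nvme3 values traced above.
    sqes=0x66 cqes=0x44 oncs=0x15d
    echo "SQ entry: required $((1 << (sqes & 0xf))) B, max $((1 << (sqes >> 4))) B"   # 64 B / 64 B
    echo "CQ entry: required $((1 << (cqes & 0xf))) B, max $((1 << (cqes >> 4))) B"   # 16 B / 16 B
    (( oncs & 1 << 8 )) && echo "Copy (SCC) supported"                                # 0x15d has bit 8 set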
00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.515 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:56.515 20:58:17 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.515 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.775 20:58:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:56.775 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:56.775 20:58:17 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:56.775 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.775 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.775 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.775 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:56.775 20:58:17 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:56.775 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.775 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.775 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.775 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:56.775 20:58:17 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:56.775 20:58:17 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:56.775 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.775 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:56.776 20:58:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.776 20:58:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:56.776 20:58:17 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:56.776 20:58:17 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@18 -- # shift 00:11:56.776 20:58:17 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.776 
20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:56.776 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.776 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.776 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:56.777 20:58:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # IFS=: 00:11:56.777 20:58:17 -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.777 20:58:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:56.777 20:58:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:56.777 20:58:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:56.777 20:58:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:56.777 20:58:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:56.777 20:58:17 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:56.777 20:58:17 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:56.777 20:58:17 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:56.777 20:58:17 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:56.777 20:58:17 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:56.777 20:58:17 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:56.777 20:58:17 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:56.777 20:58:17 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:56.777 20:58:17 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:56.777 20:58:17 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:56.777 20:58:17 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:56.777 20:58:17 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:56.777 20:58:17 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:56.777 20:58:17 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:56.777 20:58:17 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:56.777 20:58:17 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:56.777 20:58:17 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:56.777 20:58:17 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:56.777 20:58:17 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:56.777 20:58:17 -- nvme/functions.sh@197 -- # echo nvme1 00:11:56.777 20:58:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:56.777 20:58:17 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:56.777 20:58:17 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:56.777 20:58:17 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:56.777 
20:58:17 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:56.777 20:58:17 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:56.777 20:58:17 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:56.777 20:58:17 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:56.777 20:58:17 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:56.777 20:58:17 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:56.777 20:58:17 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:56.777 20:58:17 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:56.777 20:58:17 -- nvme/functions.sh@197 -- # echo nvme0 00:11:56.777 20:58:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:56.777 20:58:17 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:56.777 20:58:17 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:56.777 20:58:17 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:56.777 20:58:17 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:56.777 20:58:17 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:56.777 20:58:17 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:56.777 20:58:17 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:56.778 20:58:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:56.778 20:58:17 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:56.778 20:58:17 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:56.778 20:58:17 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:56.778 20:58:17 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:56.778 20:58:17 -- nvme/functions.sh@197 -- # echo nvme3 00:11:56.778 20:58:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:56.778 20:58:17 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:56.778 20:58:17 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:56.778 20:58:17 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:56.778 20:58:17 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:56.778 20:58:17 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:56.778 20:58:17 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:56.778 20:58:17 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:56.778 20:58:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:56.778 20:58:17 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:56.778 20:58:17 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:56.778 20:58:17 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:56.778 20:58:17 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:56.778 20:58:17 -- nvme/functions.sh@197 -- # echo nvme2 00:11:56.778 20:58:17 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:56.778 20:58:17 -- nvme/functions.sh@206 -- # echo nvme1 00:11:56.778 20:58:17 -- nvme/functions.sh@207 -- # return 0 00:11:56.778 20:58:17 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:56.778 20:58:17 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:56.778 20:58:17 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:57.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:57.715 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.715 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.715 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.974 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.974 20:58:18 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:57.974 20:58:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:57.974 20:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:57.974 20:58:18 -- common/autotest_common.sh@10 -- # set +x 00:11:57.974 ************************************ 00:11:57.974 START TEST nvme_simple_copy 00:11:57.974 ************************************ 00:11:57.974 20:58:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:58.233 Initializing NVMe Controllers 00:11:58.233 Attaching to 0000:00:08.0 00:11:58.233 Controller supports SCC. Attached to 0000:00:08.0 00:11:58.233 Namespace ID: 1 size: 4GB 00:11:58.233 Initialization complete. 00:11:58.233 00:11:58.233 Controller QEMU NVMe Ctrl (12342 ) 00:11:58.233 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:58.233 Namespace Block Size:4096 00:11:58.233 Writing LBAs 0 to 63 with Random Data 00:11:58.233 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:58.233 LBAs matching Written Data: 64 00:11:58.233 00:11:58.233 real 0m0.316s 00:11:58.233 user 0m0.114s 00:11:58.233 sys 0m0.100s 00:11:58.233 20:58:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:58.233 20:58:19 -- common/autotest_common.sh@10 -- # set +x 00:11:58.233 ************************************ 00:11:58.233 END TEST nvme_simple_copy 00:11:58.233 ************************************ 00:11:58.233 00:11:58.233 real 0m8.294s 00:11:58.233 user 0m1.553s 00:11:58.233 sys 0m1.741s 00:11:58.233 20:58:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:58.233 20:58:19 -- common/autotest_common.sh@10 -- # set +x 00:11:58.233 ************************************ 00:11:58.233 END TEST nvme_scc 00:11:58.233 ************************************ 00:11:58.492 20:58:19 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:58.492 20:58:19 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:58.492 20:58:19 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:58.492 20:58:19 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:58.492 20:58:19 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:58.492 20:58:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:58.492 20:58:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:58.492 20:58:19 -- common/autotest_common.sh@10 -- # set +x 00:11:58.492 ************************************ 00:11:58.492 START TEST nvme_fdp 00:11:58.492 ************************************ 00:11:58.492 20:58:19 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:58.492 * Looking for test storage... 
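The pass criterion for nvme_simple_copy is the "LBAs matching Written Data: 64" line: the test writes random data to LBAs 0-63, issues a Simple Copy to destination LBA 256, reads the destination range back, and requires all 64 blocks to match. The test binary drives the controller through SPDK's userspace PCIe driver, so no kernel block device is involved; the check it performs is merely equivalent to this hypothetical block-device comparison (device name and the 4096-byte block size taken from the test's own output, everything else assumed):

    # Hypothetical equivalent of the test's verification step, not the SPDK test code.
    bs=4096 src=0 dst=256 count=64
    cmp <(dd if=/dev/nvme1n1 bs=$bs skip=$src count=$count status=none) \
        <(dd if=/dev/nvme1n1 bs=$bs skip=$dst count=$count status=none) \
      && echo "LBAs matching Written Data: $count"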
00:11:58.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:58.492 20:58:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:58.492 20:58:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:58.492 20:58:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:58.492 20:58:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:58.492 20:58:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:58.492 20:58:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:58.492 20:58:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:58.492 20:58:19 -- scripts/common.sh@335 -- # IFS=.-: 00:11:58.492 20:58:19 -- scripts/common.sh@335 -- # read -ra ver1 00:11:58.492 20:58:19 -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.492 20:58:19 -- scripts/common.sh@336 -- # read -ra ver2 00:11:58.492 20:58:19 -- scripts/common.sh@337 -- # local 'op=<' 00:11:58.492 20:58:19 -- scripts/common.sh@339 -- # ver1_l=2 00:11:58.492 20:58:19 -- scripts/common.sh@340 -- # ver2_l=1 00:11:58.492 20:58:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:58.492 20:58:19 -- scripts/common.sh@343 -- # case "$op" in 00:11:58.492 20:58:19 -- scripts/common.sh@344 -- # : 1 00:11:58.492 20:58:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:58.492 20:58:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.492 20:58:19 -- scripts/common.sh@364 -- # decimal 1 00:11:58.492 20:58:19 -- scripts/common.sh@352 -- # local d=1 00:11:58.492 20:58:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.492 20:58:19 -- scripts/common.sh@354 -- # echo 1 00:11:58.492 20:58:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:58.492 20:58:19 -- scripts/common.sh@365 -- # decimal 2 00:11:58.492 20:58:19 -- scripts/common.sh@352 -- # local d=2 00:11:58.492 20:58:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.492 20:58:19 -- scripts/common.sh@354 -- # echo 2 00:11:58.492 20:58:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:58.492 20:58:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:58.492 20:58:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:58.492 20:58:19 -- scripts/common.sh@367 -- # return 0 00:11:58.492 20:58:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.492 20:58:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:58.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.492 --rc genhtml_branch_coverage=1 00:11:58.492 --rc genhtml_function_coverage=1 00:11:58.492 --rc genhtml_legend=1 00:11:58.492 --rc geninfo_all_blocks=1 00:11:58.492 --rc geninfo_unexecuted_blocks=1 00:11:58.492 00:11:58.492 ' 00:11:58.492 20:58:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:58.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.492 --rc genhtml_branch_coverage=1 00:11:58.492 --rc genhtml_function_coverage=1 00:11:58.492 --rc genhtml_legend=1 00:11:58.492 --rc geninfo_all_blocks=1 00:11:58.492 --rc geninfo_unexecuted_blocks=1 00:11:58.492 00:11:58.492 ' 00:11:58.492 20:58:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:58.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.492 --rc genhtml_branch_coverage=1 00:11:58.492 --rc genhtml_function_coverage=1 00:11:58.492 --rc genhtml_legend=1 00:11:58.492 --rc geninfo_all_blocks=1 00:11:58.492 --rc geninfo_unexecuted_blocks=1 00:11:58.492 00:11:58.492 ' 00:11:58.492 20:58:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:58.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.492 --rc genhtml_branch_coverage=1 00:11:58.492 --rc genhtml_function_coverage=1 00:11:58.492 --rc genhtml_legend=1 00:11:58.492 --rc geninfo_all_blocks=1 00:11:58.492 --rc geninfo_unexecuted_blocks=1 00:11:58.492 00:11:58.492 ' 00:11:58.492 20:58:19 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:58.492 20:58:19 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:58.492 20:58:19 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:58.492 20:58:19 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:58.492 20:58:19 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.492 20:58:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.492 20:58:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.492 20:58:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.492 20:58:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.492 20:58:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.492 20:58:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.492 20:58:19 -- paths/export.sh@5 -- # export PATH 00:11:58.492 20:58:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.492 20:58:19 -- nvme/functions.sh@10 -- # ctrls=() 00:11:58.492 20:58:19 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:58.493 20:58:19 -- nvme/functions.sh@11 -- # nvmes=() 00:11:58.493 20:58:19 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:58.493 20:58:19 -- nvme/functions.sh@12 -- # bdfs=() 00:11:58.493 20:58:19 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:58.493 20:58:19 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:58.493 20:58:19 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:58.493 
20:58:19 -- nvme/functions.sh@14 -- # nvme_name= 00:11:58.493 20:58:19 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:58.493 20:58:19 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:59.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:59.062 Waiting for block devices as requested 00:11:59.328 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:59.328 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:59.328 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:59.603 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.895 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:04.895 20:58:25 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:04.895 20:58:25 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:04.895 20:58:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:04.895 20:58:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:12:04.895 20:58:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:12:04.895 20:58:25 -- scripts/common.sh@15 -- # local i 00:12:04.895 20:58:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:12:04.895 20:58:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:04.895 20:58:25 -- scripts/common.sh@24 -- # return 0 00:12:04.895 20:58:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:04.895 20:58:25 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:04.895 20:58:25 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@18 -- # shift 00:12:04.895 20:58:25 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:04.895 20:58:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:04.895 20:58:25 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.895 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.895 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 
20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:04.896 
20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:04.896 20:58:25 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.896 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:12:04.896 20:58:25 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:12:04.896 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 
20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 
00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.897 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.897 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:04.897 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:04.898 20:58:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
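The trace up to this point is nvme_get decoding `nvme id-ctrl /dev/nvme0` into the nvme0 associative array, one `reg : val` pair per line, before registering the controller in ctrls/nvmes/bdfs. A minimal sketch of that decode loop (assuming bash 4+ and nvme-cli on PATH; names here are illustrative, not the verbatim nvme/functions.sh source):

    #!/usr/bin/env bash
    # Sketch of the loop visible in the xtrace above: split each id-ctrl
    # output line on the first ':', skip lines with no value, store the pair.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue     # header/blank lines have no value, as in '[[ -n '' ]]'
        reg=${reg//[[:space:]]/}      # 'lbaf  0 ' -> 'lbaf0', 'vid       ' -> 'vid'
        val=${val# }                  # drop the single space nvme-cli prints after ':'
        ctrl[$reg]=$val               # trailing padding is kept, e.g. sn='12342 '
    done < <(nvme id-ctrl /dev/nvme0)
    echo "oacs=${ctrl[oacs]} subnqn=${ctrl[subnqn]}"

The trace's eval form (`eval 'nvme0[acl]="3"'`) exists because the real function writes into a caller-named array; the fixed-name sketch above shows only the parsing behavior.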
00:12:04.898 20:58:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:04.898 20:58:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:12:04.898 20:58:25 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:04.898 20:58:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:04.898 20:58:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:12:04.898 20:58:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:12:04.898 20:58:25 -- scripts/common.sh@15 -- # local i 00:12:04.898 20:58:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:12:04.898 20:58:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:04.898 20:58:25 -- scripts/common.sh@24 -- # return 0 00:12:04.898 20:58:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:04.898 20:58:25 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:04.898 20:58:25 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@18 -- # shift 00:12:04.898 20:58:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:04.898 
20:58:25 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.898 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:04.898 20:58:25 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.898 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 
00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 
00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.899 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:12:04.899 20:58:25 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.899 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:04.900 20:58:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.900 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.900 20:58:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:04.900 20:58:25 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:04.901 20:58:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.901 20:58:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:04.901 20:58:25 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:04.901 20:58:25 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@18 -- # shift 00:12:04.901 20:58:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 
00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:04.901 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.901 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.901 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:04.902 20:58:25 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:04.902 20:58:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.902 20:58:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:12:04.902 20:58:25 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:12:04.902 20:58:25 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@18 -- # shift 00:12:04.902 20:58:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 
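What the nvme/functions.sh@16-23 trace above is doing: nvme_get runs nvme-cli against a device (here /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2), then reads the report one "field : value" line at a time with IFS=: and evals each pair into a global associative array named after the device. A minimal sketch of that loop, with array and variable names taken from the trace and the cleanup details reconstructed rather than copied from the real script:

nvme_get() {
    local ref=$1 reg val                    # ref is the array name, e.g. nvme1n2
    shift
    local -gA "$ref=()"                     # declare the global array, as traced at @20
    while IFS=: read -r reg val; do
        [[ -n ${val// } ]] || continue      # the @22 guard: skip headers and empty values
        reg=${reg//[[:space:]]/}            # "lbaf  0 " -> "lbaf0" (reconstructed cleanup)
        eval "${ref}[${reg}]=\"${val# }\""  # the @23 step, e.g. nvme1n2[nsze]="0x100000"
    done < <(nvme "$@")                     # @16; this job uses /usr/local/src/nvme-cli/nvme
}

Because read -r reg val keeps everything after the first colon in val, values that contain further colons (the lbaf rows above, or the subnqn parsed later for nvme2) come through intact.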
00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:12:04.902 20:58:25 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.902 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.902 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:12:04.902 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 
20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:12:04.903 20:58:25 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:12:04.903 20:58:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.903 20:58:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
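A note on the lbaf0-lbaf7 rows recorded for nvme1n1 and nvme1n2 above: each LBA format lists its metadata bytes (ms), the log2 of its data block size (lbads), and a relative-performance hint (rp), and flbas=0x4 selects lbaf4, which nvme-cli marks "(in use)". With lbads:12 and ms:0 that is plain 4096-byte blocks, so the 0x100000-block nsze/ncap/nuse values above amount to 4 GiB per namespace. A worked check (illustrative, not part of the log):

blocks=$(( 0x100000 ))            # nsze from the trace
block_size=$(( 1 << 12 ))         # lbaf4 in use: lbads:12 -> 4096-byte LBAs
echo $(( blocks * block_size ))   # 4294967296 bytes = 4 GiB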
00:12:04.903 20:58:25 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:12:04.903 20:58:25 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:12:04.903 20:58:25 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@18 -- # shift 00:12:04.903 20:58:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:12:04.903 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.903 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.903 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.904 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:12:04.904 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.904 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.905 20:58:25 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:12:04.905 20:58:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:04.905 20:58:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:04.905 20:58:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:12:04.905 20:58:25 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:04.905 20:58:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:04.905 20:58:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:12:04.905 20:58:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:12:04.905 20:58:25 -- scripts/common.sh@15 -- # local i 00:12:04.905 20:58:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:12:04.905 20:58:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:04.905 20:58:25 -- scripts/common.sh@24 -- # return 0 00:12:04.905 20:58:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:04.905 20:58:25 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:04.905 20:58:25 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@18 -- # shift 00:12:04.905 20:58:25 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
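The functions.sh@47-63 and scripts/common.sh lines above show the outer enumeration: each controller under /sys/class/nvme has its PCI address checked through pci_can_use (0000:00:08.0 was just recorded for nvme1, 0000:00:06.0 for nvme2), is identified with id-ctrl, and is filed into the ctrls, nvmes, bdfs, and ordered_ctrls arrays. The identity fields just parsed (vid 0x1b36, sn '12340 ', mn 'QEMU NVMe Ctrl') are the QEMU-emulated drives this VM was prepared with. A sketch of that loop; array names follow the trace, while the address lookup is reconstructed since the trace only shows the resulting BDF:

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                        # the @48 existence check
    pci=$(basename "$(readlink -f "$ctrl/device")")   # one way to obtain e.g. 0000:00:06.0
    pci_can_use "$pci" || continue                    # honors block/allow lists in scripts/common.sh
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills the nvme2=() array traced here
    ctrls["$ctrl_dev"]=$ctrl_dev
    bdfs["$ctrl_dev"]=$pci
done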
00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.905 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.905 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:04.905 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 
20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:04.906 20:58:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:04.906 20:58:25 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.906 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.906 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.906 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 
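Two of the fields recorded just above are packed powers of two: sqes 0x66 carries the maximum (bits 7:4) and required (bits 3:0) submission-queue entry size, and cqes 0x44 does the same for completion-queue entries, so this controller advertises the standard 64-byte SQEs and 16-byte CQEs; nn 256 is how many namespaces it can expose. A quick decode, illustrative rather than something the script runs:

sqes=0x66; cqes=0x44
echo $(( 1 << (sqes & 0xf) ))   # 64: required SQ entry size in bytes
echo $(( 1 << (sqes >> 4) ))    # 64: maximum SQ entry size in bytes
echo $(( 1 << (cqes & 0xf) ))   # 16: required CQ entry size in bytes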
00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 
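oncs above is the Optional NVM Command Support bitmask. Reading 0x15d (binary 1 0101 1101) against the NVMe spec bit assignments, as far as I recall them, this controller supports Compare, Dataset Management, Write Zeroes, the save/select field in Features, Timestamp, and Copy, while leaving Write Uncorrectable, Reservations, and Verify clear. A hedged bit-test sketch:

oncs=0x15d
(( oncs & (1 << 2) )) && echo "Dataset Management supported"   # bit 2, per spec as recalled
(( oncs & (1 << 3) )) && echo "Write Zeroes supported"         # bit 3
(( oncs & (1 << 5) )) || echo "Reservations not supported"     # bit 5 clear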
00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.907 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:04.907 20:58:25 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.907 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:04.908 20:58:25 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:04.908 20:58:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.908 20:58:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:04.908 20:58:25 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:04.908 20:58:25 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@18 -- # shift 00:12:04.908 20:58:25 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 
20:58:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:04.908 20:58:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.908 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.908 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.908 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:04.908 
20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:04.909 20:58:25 
-- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:04.909 20:58:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:04.909 20:58:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:04.909 20:58:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:12:04.909 20:58:25 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:04.909 20:58:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:04.909 20:58:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:12:04.909 20:58:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:12:04.909 20:58:25 -- scripts/common.sh@15 -- # local i 00:12:04.909 20:58:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:12:04.909 20:58:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:04.909 20:58:25 -- scripts/common.sh@24 -- # return 0 00:12:04.909 20:58:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:04.909 20:58:25 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:04.909 20:58:25 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@18 -- # shift 00:12:04.909 20:58:25 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.909 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.909 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:04.909 20:58:25 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 
20:58:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.910 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.910 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.910 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
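The handoff from nvme2 to nvme3 earlier in the trace (functions.sh@47-52) shows the outer discovery loop: iterate /sys/class/nvme/nvme*, resolve the controller's PCI address, gate it through pci_can_use (scripts/common.sh@15-24 -- the `[[ =~ 0000:00:07.0 ]]` against an empty pattern and `[[ -z '' ]]` indicate no allow/block lists are set in this run, so the gate returns 0), then run nvme_get id-ctrl on the device. A hedged sketch of that flow; PCI_ALLOWED and PCI_BLOCKED are assumed stand-ins for whatever variables the real gate consults:

# Sketch of the discovery pass at functions.sh@47-52 plus the pci_can_use
# gate; PCI_ALLOWED/PCI_BLOCKED are assumptions. Dots in $pci match any
# character in the regex, which is acceptable for a sketch.
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                        # functions.sh@48
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:07.0
    if [[ -n ${PCI_ALLOWED:-} && ! $PCI_ALLOWED =~ $pci ]]; then
        continue                                      # not on the allow list
    fi
    if [[ -n ${PCI_BLOCKED:-} && $PCI_BLOCKED =~ $pci ]]; then
        continue                                      # explicitly blocked
    fi
    ctrl_dev=${ctrl##*/}                              # functions.sh@51: nvme3
    echo "would collect: nvme id-ctrl /dev/$ctrl_dev (bdf $pci)"
done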
00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:04.911 
20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.911 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:04.911 20:58:25 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.911 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:04.912 20:58:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.912 20:58:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:12:04.912 20:58:25 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:12:04.912 20:58:25 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@18 -- # shift 00:12:04.912 20:58:25 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:12:04.912 
20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.912 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.912 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:12:04.912 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 
20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:04.913 20:58:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.913 20:58:25 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.913 20:58:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:12:04.913 20:58:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:04.913 20:58:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:04.913 20:58:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:12:04.913 20:58:25 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:04.913 20:58:25 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:04.913 20:58:25 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:04.913 20:58:25 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:04.913 20:58:25 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:04.913 20:58:25 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:04.913 20:58:25 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:04.913 20:58:25 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:04.913 20:58:25 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:04.913 20:58:25 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.913 20:58:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:04.913 20:58:25 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:04.913 20:58:25 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:04.913 20:58:25 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:04.913 20:58:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:04.913 20:58:25 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:04.913 20:58:25 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:04.913 20:58:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:04.913 20:58:25 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:04.914 20:58:25 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:04.914 20:58:25 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:04.914 20:58:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.914 20:58:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.914 20:58:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:04.914 20:58:25 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:04.914 20:58:25 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:04.914 20:58:25 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:04.914 20:58:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:04.914 20:58:25 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:04.914 20:58:25 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:04.914 20:58:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:04.914 20:58:25 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:04.914 20:58:25 -- nvme/functions.sh@76 -- # echo 0x88010 00:12:04.914 20:58:25 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:04.914 20:58:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.914 20:58:25 -- nvme/functions.sh@197 -- # echo nvme0 00:12:04.914 20:58:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.914 20:58:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:04.914 20:58:25 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:04.914 20:58:25 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:04.914 20:58:25 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:04.914 20:58:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:04.914 20:58:25 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:04.914 20:58:25 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:04.914 20:58:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:04.914 20:58:25 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:04.914 20:58:25 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:04.914 20:58:25 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:04.914 20:58:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.914 20:58:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.914 20:58:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:04.914 20:58:25 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:04.914 20:58:25 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:04.914 20:58:25 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:04.914 20:58:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:04.914 20:58:25 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:04.914 20:58:25 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:04.914 20:58:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:04.914 20:58:25 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:04.914 20:58:25 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:04.914 20:58:25 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:04.914 20:58:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.914 20:58:25 -- nvme/functions.sh@204 -- # trap - ERR 00:12:04.914 20:58:25 -- nvme/functions.sh@204 -- # print_backtrace 00:12:04.914 20:58:25 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:12:04.914 20:58:25 -- common/autotest_common.sh@1142 -- # return 0 00:12:04.914 20:58:25 -- nvme/functions.sh@204 -- # trap - ERR 00:12:04.914 20:58:25 -- nvme/functions.sh@204 -- # print_backtrace 00:12:04.914 20:58:25 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:12:04.914 20:58:25 -- common/autotest_common.sh@1142 -- # return 0 00:12:04.914 20:58:25 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:04.914 20:58:25 -- nvme/functions.sh@206 -- # echo nvme0 00:12:04.914 20:58:25 -- nvme/functions.sh@207 -- # return 0 00:12:04.914 20:58:25 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:12:04.914 20:58:25 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:12:04.914 20:58:25 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:05.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:06.108 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic
00:12:06.108 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:12:06.109 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:12:06.109 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:12:06.109 20:58:27 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0'
00:12:06.109 20:58:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:12:06.368 20:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:06.368 20:58:27 -- common/autotest_common.sh@10 -- # set +x
00:12:06.368 ************************************
00:12:06.368 START TEST nvme_flexible_data_placement
00:12:06.368 ************************************
00:12:06.368 20:58:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0'
00:12:06.627 Initializing NVMe Controllers
00:12:06.627 Attaching to 0000:00:09.0
00:12:06.627 Controller supports FDP Attached to 0000:00:09.0
00:12:06.627 Namespace ID: 1 Endurance Group ID: 1
00:12:06.627 Initialization complete.
00:12:06.627
00:12:06.627 ==================================
00:12:06.627 == FDP tests for Namespace: #01 ==
00:12:06.627 ==================================
00:12:06.627
00:12:06.627 Get Feature: FDP:
00:12:06.627 =================
00:12:06.627 Enabled: Yes
00:12:06.627 FDP configuration Index: 0
00:12:06.627
00:12:06.627 FDP configurations log page
00:12:06.627 ===========================
00:12:06.627 Number of FDP configurations: 1
00:12:06.627 Version: 0
00:12:06.627 Size: 112
00:12:06.627 FDP Configuration Descriptor: 0
00:12:06.627 Descriptor Size: 96
00:12:06.627 Reclaim Group Identifier format: 2
00:12:06.627 FDP Volatile Write Cache: Not Present
00:12:06.627 FDP Configuration: Valid
00:12:06.627 Vendor Specific Size: 0
00:12:06.627 Number of Reclaim Groups: 2
00:12:06.627 Number of Reclaim Unit Handles: 8
00:12:06.627 Max Placement Identifiers: 128
00:12:06.627 Number of Namespaces Supported: 256
00:12:06.627 Reclaim Unit Nominal Size: 6000000 bytes
00:12:06.627 Estimated Reclaim Unit Time Limit: Not Reported
00:12:06.627 RUH Desc #000: RUH Type: Initially Isolated
00:12:06.627 RUH Desc #001: RUH Type: Initially Isolated
00:12:06.627 RUH Desc #002: RUH Type: Initially Isolated
00:12:06.627 RUH Desc #003: RUH Type: Initially Isolated
00:12:06.627 RUH Desc #004: RUH Type: Initially Isolated
00:12:06.627 RUH Desc #005: RUH Type: Initially Isolated
00:12:06.627 RUH Desc #006: RUH Type: Initially Isolated
00:12:06.627 RUH Desc #007: RUH Type: Initially Isolated
00:12:06.627
00:12:06.627 FDP reclaim unit handle usage log page
00:12:06.627 ======================================
00:12:06.627 Number of Reclaim Unit Handles: 8
00:12:06.627 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:12:06.627 RUH Usage Desc #001: RUH Attributes: Unused
00:12:06.627 RUH Usage Desc #002: RUH Attributes: Unused
00:12:06.627 RUH Usage Desc #003: RUH Attributes: Unused
00:12:06.627 RUH Usage Desc #004: RUH Attributes: Unused
00:12:06.627 RUH Usage Desc #005: RUH Attributes: Unused
00:12:06.627 RUH Usage Desc #006: RUH Attributes: Unused
00:12:06.627 RUH Usage Desc #007: RUH Attributes: Unused
00:12:06.627
00:12:06.627 FDP statistics log page
00:12:06.627 =======================
00:12:06.627 Host bytes with metadata written: 764567552
00:12:06.627 Media bytes with metadata written: 764657664
00:12:06.627 Media bytes erased: 0
00:12:06.627
00:12:06.627 FDP Reclaim unit handle status
00:12:06.627 ==============================
00:12:06.627 Number of RUHS descriptors: 2
00:12:06.627 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000026da
00:12:06.627 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:12:06.627
00:12:06.627 FDP write on placement id: 0 success
00:12:06.627
00:12:06.627 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:12:06.627
00:12:06.627 IO mgmt send: RUH update for Placement ID: #0 Success
00:12:06.627
00:12:06.627 Get Feature: FDP Events for Placement handle: #0
00:12:06.627 ========================
00:12:06.627 Number of FDP Events: 6
00:12:06.628 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:12:06.628 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:12:06.628 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:12:06.628 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:12:06.628 FDP Event: #4 Type: Media Reallocated Enabled: No
00:12:06.628 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:12:06.628
00:12:06.628 FDP events log page
00:12:06.628 ===================
00:12:06.628 Number of FDP events: 1
00:12:06.628 FDP Event #0:
00:12:06.628 Event Type: RU Not Written to Capacity
00:12:06.628 Placement Identifier: Valid
00:12:06.628 NSID: Valid
00:12:06.628 Location: Valid
00:12:06.628 Placement Identifier: 0
00:12:06.628 Event Timestamp: c
00:12:06.628 Namespace Identifier: 1
00:12:06.628 Reclaim Group Identifier: 0
00:12:06.628 Reclaim Unit Handle Identifier: 0
00:12:06.628
00:12:06.628 FDP test passed
00:12:06.628
00:12:06.628 real 0m0.285s
00:12:06.628 user 0m0.103s
00:12:06.628 sys 0m0.080s
00:12:06.628 20:58:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:06.628 20:58:27 -- common/autotest_common.sh@10 -- # set +x
00:12:06.628 ************************************
00:12:06.628 END TEST nvme_flexible_data_placement
00:12:06.628 ************************************
00:12:06.628
00:12:06.628 real 0m8.162s user 0m1.410s sys 0m1.780s
00:12:06.628 20:58:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:06.628 ************************************
00:12:06.628 20:58:27 -- common/autotest_common.sh@10 -- # set +x
00:12:06.628 END TEST nvme_fdp
00:12:06.628 ************************************
00:12:06.628 20:58:27 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:12:06.628 20:58:27 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:06.628 20:58:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:06.628 20:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:06.628 20:58:27 -- common/autotest_common.sh@10 -- # set +x
00:12:06.628 ************************************
00:12:06.628 START TEST nvme_rpc
00:12:06.628 ************************************
00:12:06.628 20:58:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:06.628 * Looking for test storage...
00:12:06.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:06.628 20:58:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:06.628 20:58:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:06.628 20:58:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:06.888 20:58:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:06.888 20:58:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:06.888 20:58:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:06.888 20:58:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:06.888 20:58:27 -- scripts/common.sh@335 -- # IFS=.-: 00:12:06.888 20:58:27 -- scripts/common.sh@335 -- # read -ra ver1 00:12:06.888 20:58:27 -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.888 20:58:27 -- scripts/common.sh@336 -- # read -ra ver2 00:12:06.888 20:58:27 -- scripts/common.sh@337 -- # local 'op=<' 00:12:06.888 20:58:27 -- scripts/common.sh@339 -- # ver1_l=2 00:12:06.888 20:58:27 -- scripts/common.sh@340 -- # ver2_l=1 00:12:06.888 20:58:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:06.888 20:58:27 -- scripts/common.sh@343 -- # case "$op" in 00:12:06.888 20:58:27 -- scripts/common.sh@344 -- # : 1 00:12:06.888 20:58:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:06.888 20:58:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.888 20:58:27 -- scripts/common.sh@364 -- # decimal 1 00:12:06.888 20:58:27 -- scripts/common.sh@352 -- # local d=1 00:12:06.888 20:58:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.888 20:58:27 -- scripts/common.sh@354 -- # echo 1 00:12:06.888 20:58:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:06.888 20:58:27 -- scripts/common.sh@365 -- # decimal 2 00:12:06.888 20:58:27 -- scripts/common.sh@352 -- # local d=2 00:12:06.888 20:58:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.888 20:58:27 -- scripts/common.sh@354 -- # echo 2 00:12:06.888 20:58:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:06.888 20:58:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:06.888 20:58:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:06.888 20:58:27 -- scripts/common.sh@367 -- # return 0 00:12:06.888 20:58:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.888 20:58:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:06.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.888 --rc genhtml_branch_coverage=1 00:12:06.888 --rc genhtml_function_coverage=1 00:12:06.888 --rc genhtml_legend=1 00:12:06.888 --rc geninfo_all_blocks=1 00:12:06.888 --rc geninfo_unexecuted_blocks=1 00:12:06.888 00:12:06.888 ' 00:12:06.888 20:58:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:06.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.888 --rc genhtml_branch_coverage=1 00:12:06.888 --rc genhtml_function_coverage=1 00:12:06.888 --rc genhtml_legend=1 00:12:06.888 --rc geninfo_all_blocks=1 00:12:06.888 --rc geninfo_unexecuted_blocks=1 00:12:06.888 00:12:06.888 ' 00:12:06.888 20:58:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:06.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.888 --rc genhtml_branch_coverage=1 00:12:06.888 --rc genhtml_function_coverage=1 00:12:06.888 --rc genhtml_legend=1 00:12:06.888 --rc geninfo_all_blocks=1 00:12:06.888 --rc geninfo_unexecuted_blocks=1 00:12:06.888 00:12:06.888 ' 00:12:06.888 20:58:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:06.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.888 --rc genhtml_branch_coverage=1 00:12:06.888 --rc genhtml_function_coverage=1 00:12:06.888 --rc genhtml_legend=1 00:12:06.888 --rc geninfo_all_blocks=1 00:12:06.888 --rc geninfo_unexecuted_blocks=1 00:12:06.888 00:12:06.888 ' 00:12:06.888 20:58:27 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:06.888 20:58:27 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:06.888 20:58:27 -- common/autotest_common.sh@1519 -- # bdfs=() 00:12:06.888 20:58:27 -- common/autotest_common.sh@1519 -- # local bdfs 00:12:06.888 20:58:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:12:06.888 20:58:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:12:06.888 20:58:27 -- common/autotest_common.sh@1508 -- # bdfs=() 00:12:06.888 20:58:27 -- common/autotest_common.sh@1508 -- # local bdfs 00:12:06.888 20:58:27 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:06.888 20:58:27 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:06.888 20:58:27 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:12:06.888 20:58:27 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:12:06.888 20:58:27 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:06.888 20:58:27 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:12:06.888 20:58:27 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:12:06.888 20:58:27 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67062 00:12:06.888 20:58:27 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:06.888 20:58:27 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:06.888 20:58:27 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67062 00:12:06.888 20:58:27 -- common/autotest_common.sh@829 -- # '[' -z 67062 ']' 00:12:06.888 20:58:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.888 20:58:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.888 20:58:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.888 20:58:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.888 20:58:27 -- common/autotest_common.sh@10 -- # set +x 00:12:06.888 [2024-12-08 20:58:27.902937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
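
A note on the BDF selection a few lines up: get_first_nvme_bdf asks gen_nvme.sh for every NVMe controller SPDK can claim and keeps the first address. Reconstructed from the trace (same vagrant checkout, devices already bound by setup.sh), the standalone equivalent is a minimal sketch:

    # Print the PCI address of every NVMe controller visible to SPDK
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
    # This run printed 0000:00:06.0, 0000:00:07.0, 0000:00:08.0 and 0000:00:09.0;
    # the test pins its controller to the first entry, 0000:00:06.0
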
00:12:06.888 [2024-12-08 20:58:27.903139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67062 ] 00:12:07.148 [2024-12-08 20:58:28.076188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:07.407 [2024-12-08 20:58:28.302548] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:07.407 [2024-12-08 20:58:28.302954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.407 [2024-12-08 20:58:28.302965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.786 20:58:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.786 20:58:29 -- common/autotest_common.sh@862 -- # return 0 00:12:08.786 20:58:29 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:12:09.045 Nvme0n1 00:12:09.045 20:58:29 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:09.045 20:58:29 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:09.304 request: 00:12:09.304 { 00:12:09.304 "filename": "non_existing_file", 00:12:09.304 "bdev_name": "Nvme0n1", 00:12:09.304 "method": "bdev_nvme_apply_firmware", 00:12:09.304 "req_id": 1 00:12:09.304 } 00:12:09.304 Got JSON-RPC error response 00:12:09.304 response: 00:12:09.304 { 00:12:09.304 "code": -32603, 00:12:09.304 "message": "open file failed." 00:12:09.304 } 00:12:09.304 20:58:30 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:09.304 20:58:30 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:09.304 20:58:30 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:09.304 20:58:30 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:09.304 20:58:30 -- nvme/nvme_rpc.sh@40 -- # killprocess 67062 00:12:09.304 20:58:30 -- common/autotest_common.sh@936 -- # '[' -z 67062 ']' 00:12:09.304 20:58:30 -- common/autotest_common.sh@940 -- # kill -0 67062 00:12:09.304 20:58:30 -- common/autotest_common.sh@941 -- # uname 00:12:09.304 20:58:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:09.304 20:58:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67062 00:12:09.563 20:58:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:09.563 20:58:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:09.564 20:58:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67062' 00:12:09.564 killing process with pid 67062 00:12:09.564 20:58:30 -- common/autotest_common.sh@955 -- # kill 67062 00:12:09.564 20:58:30 -- common/autotest_common.sh@960 -- # wait 67062 00:12:10.942 00:12:10.942 real 0m4.417s 00:12:10.942 user 0m8.541s 00:12:10.942 sys 0m0.608s 00:12:10.942 20:58:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:10.942 ************************************ 00:12:10.942 END TEST nvme_rpc 00:12:10.942 ************************************ 00:12:10.942 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:12:11.201 20:58:31 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:11.201 20:58:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:11.201 20:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 
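
The nvme_rpc body above reduces to three rpc.py calls against the freshly started spdk_tgt; a minimal sketch reconstructed from the trace (default /var/tmp/spdk.sock socket assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the controller at the first BDF as bdev "Nvme0n1"
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    # Negative test: the firmware image does not exist, so the call must fail
    # with code -32603 "open file failed." (rv=1 in the trace above)
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo 'failed as expected'
    $rpc bdev_nvme_detach_controller Nvme0
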
00:12:11.201 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:12:11.201 ************************************ 00:12:11.201 START TEST nvme_rpc_timeouts 00:12:11.201 ************************************ 00:12:11.201 20:58:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:11.201 * Looking for test storage... 00:12:11.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:11.201 20:58:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:11.201 20:58:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:11.201 20:58:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:11.201 20:58:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:11.201 20:58:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:11.201 20:58:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:11.201 20:58:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:11.202 20:58:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:11.202 20:58:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:11.202 20:58:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.202 20:58:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:11.202 20:58:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:11.202 20:58:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:11.202 20:58:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:11.202 20:58:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:11.202 20:58:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:11.202 20:58:32 -- scripts/common.sh@344 -- # : 1 00:12:11.202 20:58:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:11.202 20:58:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:11.202 20:58:32 -- scripts/common.sh@364 -- # decimal 1 00:12:11.202 20:58:32 -- scripts/common.sh@352 -- # local d=1 00:12:11.202 20:58:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.202 20:58:32 -- scripts/common.sh@354 -- # echo 1 00:12:11.202 20:58:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:11.202 20:58:32 -- scripts/common.sh@365 -- # decimal 2 00:12:11.202 20:58:32 -- scripts/common.sh@352 -- # local d=2 00:12:11.202 20:58:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.202 20:58:32 -- scripts/common.sh@354 -- # echo 2 00:12:11.202 20:58:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:11.202 20:58:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:11.202 20:58:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:11.202 20:58:32 -- scripts/common.sh@367 -- # return 0 00:12:11.202 20:58:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.202 20:58:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.202 --rc genhtml_branch_coverage=1 00:12:11.202 --rc genhtml_function_coverage=1 00:12:11.202 --rc genhtml_legend=1 00:12:11.202 --rc geninfo_all_blocks=1 00:12:11.202 --rc geninfo_unexecuted_blocks=1 00:12:11.202 00:12:11.202 ' 00:12:11.202 20:58:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.202 --rc genhtml_branch_coverage=1 00:12:11.202 --rc genhtml_function_coverage=1 00:12:11.202 --rc genhtml_legend=1 00:12:11.202 --rc geninfo_all_blocks=1 00:12:11.202 --rc geninfo_unexecuted_blocks=1 00:12:11.202 00:12:11.202 ' 00:12:11.202 20:58:32 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.202 --rc genhtml_branch_coverage=1 00:12:11.202 --rc genhtml_function_coverage=1 00:12:11.202 --rc genhtml_legend=1 00:12:11.202 --rc geninfo_all_blocks=1 00:12:11.202 --rc geninfo_unexecuted_blocks=1 00:12:11.202 00:12:11.202 ' 00:12:11.202 20:58:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.202 --rc genhtml_branch_coverage=1 00:12:11.202 --rc genhtml_function_coverage=1 00:12:11.202 --rc genhtml_legend=1 00:12:11.202 --rc geninfo_all_blocks=1 00:12:11.202 --rc geninfo_unexecuted_blocks=1 00:12:11.202 00:12:11.202 ' 00:12:11.202 20:58:32 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:11.202 20:58:32 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67140 00:12:11.202 20:58:32 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67140 00:12:11.202 20:58:32 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67171 00:12:11.202 20:58:32 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:11.202 20:58:32 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67171 00:12:11.202 20:58:32 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:11.202 20:58:32 -- common/autotest_common.sh@829 -- # '[' -z 67171 ']' 00:12:11.202 20:58:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.202 20:58:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.202 20:58:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.202 20:58:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.202 20:58:32 -- common/autotest_common.sh@10 -- # set +x 00:12:11.202 [2024-12-08 20:58:32.230574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:11.202 [2024-12-08 20:58:32.230733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67171 ] 00:12:11.461 [2024-12-08 20:58:32.378607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:11.721 [2024-12-08 20:58:32.520692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:11.721 [2024-12-08 20:58:32.521130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.721 [2024-12-08 20:58:32.521133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.289 20:58:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.289 Checking default timeout settings: 00:12:12.289 20:58:33 -- common/autotest_common.sh@862 -- # return 0 00:12:12.289 20:58:33 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:12.289 20:58:33 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:12.548 Making settings changes with rpc: 00:12:12.548 20:58:33 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:12.548 20:58:33 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:12.808 Check default vs. modified settings: 00:12:12.808 20:58:33 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:12.808 20:58:33 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67140 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67140 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:13.068 Setting action_on_timeout is changed as expected. 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
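
Each of the three settings is verified the same way as action_on_timeout above: the value is grepped out of the JSON that save_config dumped before and after bdev_nvme_set_options, stripped down to alphanumerics, and the pair is compared. One round of that loop, lifted from the trace as a sketch:

    # /tmp/settings_default_67140 and /tmp/settings_modified_67140 are the
    # two save_config dumps from this run
    before=$(grep timeout_us /tmp/settings_default_67140 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep timeout_us /tmp/settings_modified_67140 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" != "$after" ] && echo 'Setting timeout_us is changed as expected.'
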
00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67140 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67140 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:13.068 Setting timeout_us is changed as expected. 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67140 00:12:13.068 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:13.326 20:58:34 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:13.327 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:13.327 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67140 00:12:13.327 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.327 20:58:34 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:13.327 20:58:34 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:13.327 20:58:34 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:13.327 Setting timeout_admin_us is changed as expected. 00:12:13.327 20:58:34 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:13.327 20:58:34 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67140 /tmp/settings_modified_67140 00:12:13.327 20:58:34 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67171 00:12:13.327 20:58:34 -- common/autotest_common.sh@936 -- # '[' -z 67171 ']' 00:12:13.327 20:58:34 -- common/autotest_common.sh@940 -- # kill -0 67171 00:12:13.327 20:58:34 -- common/autotest_common.sh@941 -- # uname 00:12:13.327 20:58:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:13.327 20:58:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67171 00:12:13.327 20:58:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:13.327 killing process with pid 67171 00:12:13.327 20:58:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:13.327 20:58:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67171' 00:12:13.327 20:58:34 -- common/autotest_common.sh@955 -- # kill 67171 00:12:13.327 20:58:34 -- common/autotest_common.sh@960 -- # wait 67171 00:12:15.240 RPC TIMEOUT SETTING TEST PASSED. 00:12:15.240 20:58:35 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:12:15.240 00:12:15.240 real 0m3.786s 00:12:15.240 user 0m7.358s 00:12:15.240 sys 0m0.537s 00:12:15.240 20:58:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:15.240 ************************************ 00:12:15.240 20:58:35 -- common/autotest_common.sh@10 -- # set +x 00:12:15.240 END TEST nvme_rpc_timeouts 00:12:15.240 ************************************ 00:12:15.240 20:58:35 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:12:15.240 20:58:35 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:12:15.240 20:58:35 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:15.240 20:58:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:15.240 20:58:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.240 20:58:35 -- common/autotest_common.sh@10 -- # set +x 00:12:15.240 ************************************ 00:12:15.240 START TEST nvme_xnvme 00:12:15.240 ************************************ 00:12:15.240 20:58:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:15.240 * Looking for test storage... 00:12:15.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:15.240 20:58:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:15.240 20:58:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:15.240 20:58:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:15.240 20:58:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:15.240 20:58:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:15.240 20:58:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:15.240 20:58:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:15.240 20:58:35 -- scripts/common.sh@335 -- # IFS=.-: 00:12:15.240 20:58:35 -- scripts/common.sh@335 -- # read -ra ver1 00:12:15.240 20:58:35 -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.240 20:58:35 -- scripts/common.sh@336 -- # read -ra ver2 00:12:15.240 20:58:35 -- scripts/common.sh@337 -- # local 'op=<' 00:12:15.241 20:58:35 -- scripts/common.sh@339 -- # ver1_l=2 00:12:15.241 20:58:35 -- scripts/common.sh@340 -- # ver2_l=1 00:12:15.241 20:58:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:15.241 20:58:35 -- scripts/common.sh@343 -- # case "$op" in 00:12:15.241 20:58:35 -- scripts/common.sh@344 -- # : 1 00:12:15.241 20:58:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:15.241 20:58:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.241 20:58:35 -- scripts/common.sh@364 -- # decimal 1 00:12:15.241 20:58:35 -- scripts/common.sh@352 -- # local d=1 00:12:15.241 20:58:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.241 20:58:35 -- scripts/common.sh@354 -- # echo 1 00:12:15.241 20:58:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:15.241 20:58:36 -- scripts/common.sh@365 -- # decimal 2 00:12:15.241 20:58:36 -- scripts/common.sh@352 -- # local d=2 00:12:15.241 20:58:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.241 20:58:36 -- scripts/common.sh@354 -- # echo 2 00:12:15.241 20:58:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:15.241 20:58:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:15.241 20:58:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:15.241 20:58:36 -- scripts/common.sh@367 -- # return 0 00:12:15.241 20:58:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.241 20:58:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:15.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.241 --rc genhtml_branch_coverage=1 00:12:15.241 --rc genhtml_function_coverage=1 00:12:15.241 --rc genhtml_legend=1 00:12:15.241 --rc geninfo_all_blocks=1 00:12:15.241 --rc geninfo_unexecuted_blocks=1 00:12:15.241 00:12:15.241 ' 00:12:15.241 20:58:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:15.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.241 --rc genhtml_branch_coverage=1 00:12:15.241 --rc genhtml_function_coverage=1 00:12:15.241 --rc genhtml_legend=1 00:12:15.241 --rc geninfo_all_blocks=1 00:12:15.241 --rc geninfo_unexecuted_blocks=1 00:12:15.241 00:12:15.241 ' 00:12:15.241 20:58:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:15.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.241 --rc genhtml_branch_coverage=1 00:12:15.241 --rc genhtml_function_coverage=1 00:12:15.241 --rc genhtml_legend=1 00:12:15.241 --rc geninfo_all_blocks=1 00:12:15.241 --rc geninfo_unexecuted_blocks=1 00:12:15.241 00:12:15.241 ' 00:12:15.241 20:58:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:15.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.241 --rc genhtml_branch_coverage=1 00:12:15.241 --rc genhtml_function_coverage=1 00:12:15.241 --rc genhtml_legend=1 00:12:15.241 --rc geninfo_all_blocks=1 00:12:15.241 --rc geninfo_unexecuted_blocks=1 00:12:15.241 00:12:15.241 ' 00:12:15.241 20:58:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.241 20:58:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.241 20:58:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.241 20:58:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.241 20:58:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.241 20:58:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.241 20:58:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.241 20:58:36 -- paths/export.sh@5 -- # export PATH 00:12:15.241 20:58:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:15.241 20:58:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:15.241 20:58:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.241 20:58:36 -- common/autotest_common.sh@10 -- # set +x 00:12:15.241 ************************************ 00:12:15.241 START TEST xnvme_to_malloc_dd_copy 00:12:15.241 ************************************ 00:12:15.241 20:58:36 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:15.241 20:58:36 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:15.241 20:58:36 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:15.241 20:58:36 -- dd/common.sh@191 -- # return 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@18 -- # local io 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@42 -- # 
gen_conf 00:12:15.241 20:58:36 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:15.241 20:58:36 -- dd/common.sh@31 -- # xtrace_disable 00:12:15.241 20:58:36 -- common/autotest_common.sh@10 -- # set +x 00:12:15.241 { 00:12:15.241 "subsystems": [ 00:12:15.241 { 00:12:15.241 "subsystem": "bdev", 00:12:15.241 "config": [ 00:12:15.241 { 00:12:15.241 "params": { 00:12:15.241 "block_size": 512, 00:12:15.241 "num_blocks": 2097152, 00:12:15.241 "name": "malloc0" 00:12:15.241 }, 00:12:15.241 "method": "bdev_malloc_create" 00:12:15.241 }, 00:12:15.241 { 00:12:15.241 "params": { 00:12:15.241 "io_mechanism": "libaio", 00:12:15.241 "filename": "/dev/nullb0", 00:12:15.241 "name": "null0" 00:12:15.241 }, 00:12:15.241 "method": "bdev_xnvme_create" 00:12:15.241 }, 00:12:15.241 { 00:12:15.241 "method": "bdev_wait_for_examine" 00:12:15.241 } 00:12:15.241 ] 00:12:15.241 } 00:12:15.241 ] 00:12:15.241 } 00:12:15.241 [2024-12-08 20:58:36.154601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:15.241 [2024-12-08 20:58:36.154758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67307 ] 00:12:15.500 [2024-12-08 20:58:36.323112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.500 [2024-12-08 20:58:36.463736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.404  [2024-12-08T20:58:39.825Z] Copying: 224/1024 [MB] (224 MBps) [2024-12-08T20:58:40.764Z] Copying: 448/1024 [MB] (224 MBps) [2024-12-08T20:58:41.701Z] Copying: 671/1024 [MB] (222 MBps) [2024-12-08T20:58:42.269Z] Copying: 895/1024 [MB] (224 MBps) [2024-12-08T20:58:44.803Z] Copying: 1024/1024 [MB] (average 223 MBps) 00:12:23.760 00:12:23.760 20:58:44 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:23.760 20:58:44 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:23.760 20:58:44 -- dd/common.sh@31 -- # xtrace_disable 00:12:23.760 20:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:23.760 { 00:12:23.760 "subsystems": [ 00:12:23.760 { 00:12:23.760 "subsystem": "bdev", 00:12:23.760 "config": [ 00:12:23.760 { 00:12:23.760 "params": { 00:12:23.760 "block_size": 512, 00:12:23.760 "num_blocks": 2097152, 00:12:23.760 "name": "malloc0" 00:12:23.760 }, 00:12:23.760 "method": "bdev_malloc_create" 00:12:23.760 }, 00:12:23.760 { 00:12:23.760 "params": { 00:12:23.760 "io_mechanism": "libaio", 00:12:23.760 "filename": "/dev/nullb0", 00:12:23.760 "name": "null0" 00:12:23.760 }, 00:12:23.760 "method": "bdev_xnvme_create" 00:12:23.760 }, 00:12:23.760 { 00:12:23.760 "method": "bdev_wait_for_examine" 00:12:23.760 } 00:12:23.760 ] 00:12:23.760 } 00:12:23.760 ] 00:12:23.760 } 00:12:23.760 [2024-12-08 20:58:44.779254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
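
The copy phases in this xnvme test all share one invocation shape: spdk_dd receives the bdev config as JSON on a spare file descriptor and copies malloc0 to null0, then back. A standalone sketch of the first (libaio) phase, with the config written to a temp file instead of /dev/fd/62 and the null_blk module loaded as init_null_blk does above:

    cfg=$(mktemp)
    cat > "$cfg" <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
       "method": "bdev_malloc_create"},
      {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
       "method": "bdev_xnvme_create"},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json "$cfg"
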
00:12:23.760 [2024-12-08 20:58:44.779415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67404 ] 00:12:24.020 [2024-12-08 20:58:44.935052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.278 [2024-12-08 20:58:45.088335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.183  [2024-12-08T20:58:48.163Z] Copying: 233/1024 [MB] (233 MBps) [2024-12-08T20:58:49.099Z] Copying: 465/1024 [MB] (232 MBps) [2024-12-08T20:58:50.475Z] Copying: 698/1024 [MB] (232 MBps) [2024-12-08T20:58:50.475Z] Copying: 928/1024 [MB] (230 MBps) [2024-12-08T20:58:53.790Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:12:32.747 00:12:32.748 20:58:53 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:32.748 20:58:53 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:32.748 20:58:53 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:32.748 20:58:53 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:32.748 20:58:53 -- dd/common.sh@31 -- # xtrace_disable 00:12:32.748 20:58:53 -- common/autotest_common.sh@10 -- # set +x 00:12:32.748 { 00:12:32.748 "subsystems": [ 00:12:32.748 { 00:12:32.748 "subsystem": "bdev", 00:12:32.748 "config": [ 00:12:32.748 { 00:12:32.748 "params": { 00:12:32.748 "block_size": 512, 00:12:32.748 "num_blocks": 2097152, 00:12:32.748 "name": "malloc0" 00:12:32.748 }, 00:12:32.748 "method": "bdev_malloc_create" 00:12:32.748 }, 00:12:32.748 { 00:12:32.748 "params": { 00:12:32.748 "io_mechanism": "io_uring", 00:12:32.748 "filename": "/dev/nullb0", 00:12:32.748 "name": "null0" 00:12:32.748 }, 00:12:32.748 "method": "bdev_xnvme_create" 00:12:32.748 }, 00:12:32.748 { 00:12:32.748 "method": "bdev_wait_for_examine" 00:12:32.748 } 00:12:32.748 ] 00:12:32.748 } 00:12:32.748 ] 00:12:32.748 } 00:12:32.748 [2024-12-08 20:58:53.271773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
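
From here the loop repeats with a single knob changed: io_mechanism in the bdev_xnvme_create params flips from libaio to io_uring, as the conf just above shows. A hedged sketch of driving both engines back to back, reusing the $cfg file from the earlier sketch (the jq edit is illustrative, not part of the test itself):

    for io in libaio io_uring; do
      # Rewrite only the xnvme bdev's io engine, leave the rest of the config alone
      jq --arg io "$io" \
        '(.subsystems[].config[] | select(.method == "bdev_xnvme_create").params.io_mechanism) |= $io' \
        "$cfg" > "$cfg.$io"
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json "$cfg.$io"
    done
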
00:12:32.748 [2024-12-08 20:58:53.271932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67508 ] 00:12:32.748 [2024-12-08 20:58:53.439051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.748 [2024-12-08 20:58:53.583252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.651  [2024-12-08T20:58:56.630Z] Copying: 237/1024 [MB] (237 MBps) [2024-12-08T20:58:57.567Z] Copying: 473/1024 [MB] (236 MBps) [2024-12-08T20:58:58.941Z] Copying: 709/1024 [MB] (235 MBps) [2024-12-08T20:58:58.941Z] Copying: 946/1024 [MB] (236 MBps) [2024-12-08T20:59:02.245Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:12:41.202 00:12:41.202 20:59:01 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:41.202 20:59:01 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:41.202 20:59:01 -- dd/common.sh@31 -- # xtrace_disable 00:12:41.202 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:12:41.202 { 00:12:41.202 "subsystems": [ 00:12:41.202 { 00:12:41.202 "subsystem": "bdev", 00:12:41.202 "config": [ 00:12:41.202 { 00:12:41.202 "params": { 00:12:41.202 "block_size": 512, 00:12:41.202 "num_blocks": 2097152, 00:12:41.202 "name": "malloc0" 00:12:41.202 }, 00:12:41.202 "method": "bdev_malloc_create" 00:12:41.202 }, 00:12:41.202 { 00:12:41.202 "params": { 00:12:41.202 "io_mechanism": "io_uring", 00:12:41.202 "filename": "/dev/nullb0", 00:12:41.202 "name": "null0" 00:12:41.202 }, 00:12:41.202 "method": "bdev_xnvme_create" 00:12:41.202 }, 00:12:41.202 { 00:12:41.202 "method": "bdev_wait_for_examine" 00:12:41.202 } 00:12:41.202 ] 00:12:41.202 } 00:12:41.202 ] 00:12:41.202 } 00:12:41.202 [2024-12-08 20:59:01.653208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
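Every config in this section points the xnvme bdev at /dev/nullb0, so the MBps figures measure the libaio/io_uring submission path rather than real media. The device comes from the kernel null_blk module; the dd/common.sh traces further down show the calls, condensed here as a sketch (the surrounding helper bodies are not shown in this excerpt):

# null_blk setup/teardown, as traced in dd/common.sh (gb=1 exposes a 1 GiB
# RAM-backed /dev/nullb0, matching the 1024 MB copied per run):
modprobe null_blk gb=1   # init_null_blk
# ... run the spdk_dd copies ...
modprobe -r null_blk     # remove_null_blk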
00:12:41.202 [2024-12-08 20:59:01.653367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67601 ] 00:12:41.202 [2024-12-08 20:59:01.822592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.202 [2024-12-08 20:59:01.965616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.106  [2024-12-08T20:59:05.085Z] Copying: 246/1024 [MB] (246 MBps) [2024-12-08T20:59:06.019Z] Copying: 491/1024 [MB] (245 MBps) [2024-12-08T20:59:06.955Z] Copying: 736/1024 [MB] (244 MBps) [2024-12-08T20:59:07.214Z] Copying: 980/1024 [MB] (244 MBps) [2024-12-08T20:59:10.496Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:12:49.453 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:49.453 20:59:09 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:49.453 00:12:49.453 real 0m33.792s 00:12:49.453 user 0m29.500s 00:12:49.453 sys 0m3.777s 00:12:49.453 20:59:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:49.453 20:59:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.453 ************************************ 00:12:49.453 END TEST xnvme_to_malloc_dd_copy 00:12:49.453 ************************************ 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:49.453 20:59:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:49.453 20:59:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.453 20:59:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.453 ************************************ 00:12:49.453 START TEST xnvme_bdevperf 00:12:49.453 ************************************ 00:12:49.453 20:59:09 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:49.453 20:59:09 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:49.453 20:59:09 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:49.453 20:59:09 -- dd/common.sh@191 -- # return 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@60 -- # local io 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:49.453 20:59:09 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:49.454 20:59:09 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:49.454 20:59:09 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:49.454 20:59:09 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:49.454 20:59:09 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:49.454 20:59:09 -- dd/common.sh@31 -- # xtrace_disable 00:12:49.454 20:59:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.454 { 00:12:49.454 "subsystems": [ 00:12:49.454 { 00:12:49.454 "subsystem": "bdev", 00:12:49.454 "config": [ 00:12:49.454 { 
00:12:49.454 "params": { 00:12:49.454 "io_mechanism": "libaio", 00:12:49.454 "filename": "/dev/nullb0", 00:12:49.454 "name": "null0" 00:12:49.454 }, 00:12:49.454 "method": "bdev_xnvme_create" 00:12:49.454 }, 00:12:49.454 { 00:12:49.454 "method": "bdev_wait_for_examine" 00:12:49.454 } 00:12:49.454 ] 00:12:49.454 } 00:12:49.454 ] 00:12:49.454 } 00:12:49.454 [2024-12-08 20:59:09.985435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:49.454 [2024-12-08 20:59:09.985646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67722 ] 00:12:49.454 [2024-12-08 20:59:10.155528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.454 [2024-12-08 20:59:10.298597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.712 Running I/O for 5 seconds... 00:12:54.980 00:12:54.980 Latency(us) 00:12:54.980 [2024-12-08T20:59:16.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.980 [2024-12-08T20:59:16.023Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:54.980 null0 : 5.00 161794.69 632.01 0.00 0.00 393.14 124.74 673.98 00:12:54.980 [2024-12-08T20:59:16.023Z] =================================================================================================================== 00:12:54.980 [2024-12-08T20:59:16.023Z] Total : 161794.69 632.01 0.00 0.00 393.14 124.74 673.98 00:12:55.546 20:59:16 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:55.546 20:59:16 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:55.546 20:59:16 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:55.546 20:59:16 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:55.546 20:59:16 -- dd/common.sh@31 -- # xtrace_disable 00:12:55.546 20:59:16 -- common/autotest_common.sh@10 -- # set +x 00:12:55.546 { 00:12:55.546 "subsystems": [ 00:12:55.546 { 00:12:55.546 "subsystem": "bdev", 00:12:55.546 "config": [ 00:12:55.546 { 00:12:55.546 "params": { 00:12:55.546 "io_mechanism": "io_uring", 00:12:55.546 "filename": "/dev/nullb0", 00:12:55.546 "name": "null0" 00:12:55.546 }, 00:12:55.546 "method": "bdev_xnvme_create" 00:12:55.546 }, 00:12:55.546 { 00:12:55.546 "method": "bdev_wait_for_examine" 00:12:55.546 } 00:12:55.546 ] 00:12:55.546 } 00:12:55.546 ] 00:12:55.546 } 00:12:55.546 [2024-12-08 20:59:16.495337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:55.546 [2024-12-08 20:59:16.495492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67796 ] 00:12:55.804 [2024-12-08 20:59:16.662128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.804 [2024-12-08 20:59:16.805215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.063 Running I/O for 5 seconds... 
00:13:01.333 00:13:01.333 Latency(us) 00:13:01.333 [2024-12-08T20:59:22.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.333 [2024-12-08T20:59:22.376Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:01.333 null0 : 5.00 204425.98 798.54 0.00 0.00 310.70 184.32 696.32 00:13:01.333 [2024-12-08T20:59:22.376Z] =================================================================================================================== 00:13:01.333 [2024-12-08T20:59:22.376Z] Total : 204425.98 798.54 0.00 0.00 310.70 184.32 696.32 00:13:01.900 20:59:22 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:01.900 20:59:22 -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:01.900 00:13:01.900 real 0m13.038s 00:13:01.900 user 0m10.152s 00:13:01.900 sys 0m2.683s 00:13:01.900 20:59:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.900 20:59:22 -- common/autotest_common.sh@10 -- # set +x 00:13:01.900 ************************************ 00:13:01.900 END TEST xnvme_bdevperf 00:13:01.900 ************************************ 00:13:02.159 00:13:02.159 real 0m47.107s 00:13:02.159 user 0m39.789s 00:13:02.159 sys 0m6.597s 00:13:02.159 20:59:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:02.159 ************************************ 00:13:02.159 END TEST nvme_xnvme 00:13:02.159 20:59:22 -- common/autotest_common.sh@10 -- # set +x 00:13:02.159 ************************************ 00:13:02.159 20:59:22 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:02.159 20:59:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:02.159 20:59:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.159 20:59:22 -- common/autotest_common.sh@10 -- # set +x 00:13:02.159 ************************************ 00:13:02.159 START TEST blockdev_xnvme 00:13:02.159 ************************************ 00:13:02.159 20:59:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:02.159 * Looking for test storage... 00:13:02.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:02.159 20:59:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:02.159 20:59:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:02.159 20:59:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:02.159 20:59:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:02.159 20:59:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:02.159 20:59:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:02.159 20:59:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:02.159 20:59:23 -- scripts/common.sh@335 -- # IFS=.-: 00:13:02.159 20:59:23 -- scripts/common.sh@335 -- # read -ra ver1 00:13:02.159 20:59:23 -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.159 20:59:23 -- scripts/common.sh@336 -- # read -ra ver2 00:13:02.159 20:59:23 -- scripts/common.sh@337 -- # local 'op=<' 00:13:02.159 20:59:23 -- scripts/common.sh@339 -- # ver1_l=2 00:13:02.159 20:59:23 -- scripts/common.sh@340 -- # ver2_l=1 00:13:02.159 20:59:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:02.159 20:59:23 -- scripts/common.sh@343 -- # case "$op" in 00:13:02.159 20:59:23 -- scripts/common.sh@344 -- # : 1 00:13:02.159 20:59:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:02.159 20:59:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:02.159 20:59:23 -- scripts/common.sh@364 -- # decimal 1 00:13:02.159 20:59:23 -- scripts/common.sh@352 -- # local d=1 00:13:02.159 20:59:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.159 20:59:23 -- scripts/common.sh@354 -- # echo 1 00:13:02.159 20:59:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:02.159 20:59:23 -- scripts/common.sh@365 -- # decimal 2 00:13:02.159 20:59:23 -- scripts/common.sh@352 -- # local d=2 00:13:02.159 20:59:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.159 20:59:23 -- scripts/common.sh@354 -- # echo 2 00:13:02.159 20:59:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:02.159 20:59:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:02.159 20:59:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:02.159 20:59:23 -- scripts/common.sh@367 -- # return 0 00:13:02.159 20:59:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.159 20:59:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:02.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.159 --rc genhtml_branch_coverage=1 00:13:02.159 --rc genhtml_function_coverage=1 00:13:02.159 --rc genhtml_legend=1 00:13:02.159 --rc geninfo_all_blocks=1 00:13:02.159 --rc geninfo_unexecuted_blocks=1 00:13:02.159 00:13:02.159 ' 00:13:02.159 20:59:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:02.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.159 --rc genhtml_branch_coverage=1 00:13:02.159 --rc genhtml_function_coverage=1 00:13:02.159 --rc genhtml_legend=1 00:13:02.159 --rc geninfo_all_blocks=1 00:13:02.159 --rc geninfo_unexecuted_blocks=1 00:13:02.159 00:13:02.159 ' 00:13:02.159 20:59:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:02.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.159 --rc genhtml_branch_coverage=1 00:13:02.159 --rc genhtml_function_coverage=1 00:13:02.159 --rc genhtml_legend=1 00:13:02.159 --rc geninfo_all_blocks=1 00:13:02.159 --rc geninfo_unexecuted_blocks=1 00:13:02.159 00:13:02.159 ' 00:13:02.159 20:59:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:02.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.159 --rc genhtml_branch_coverage=1 00:13:02.159 --rc genhtml_function_coverage=1 00:13:02.159 --rc genhtml_legend=1 00:13:02.159 --rc geninfo_all_blocks=1 00:13:02.159 --rc geninfo_unexecuted_blocks=1 00:13:02.159 00:13:02.159 ' 00:13:02.159 20:59:23 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:02.159 20:59:23 -- bdev/nbd_common.sh@6 -- # set -e 00:13:02.159 20:59:23 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:02.159 20:59:23 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:02.159 20:59:23 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:02.159 20:59:23 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:02.159 20:59:23 -- bdev/blockdev.sh@18 -- # : 00:13:02.159 20:59:23 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:13:02.159 20:59:23 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:13:02.159 20:59:23 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:13:02.159 20:59:23 -- bdev/blockdev.sh@672 -- # uname -s 00:13:02.159 20:59:23 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:13:02.159 20:59:23 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:13:02.159 20:59:23 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:13:02.159 20:59:23 -- bdev/blockdev.sh@681 -- # crypto_device= 00:13:02.159 20:59:23 -- bdev/blockdev.sh@682 -- # dek= 00:13:02.159 20:59:23 -- bdev/blockdev.sh@683 -- # env_ctx= 00:13:02.159 20:59:23 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:13:02.159 20:59:23 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:13:02.159 20:59:23 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:13:02.159 20:59:23 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:13:02.159 20:59:23 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:13:02.159 20:59:23 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67943 00:13:02.159 20:59:23 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:02.159 20:59:23 -- bdev/blockdev.sh@47 -- # waitforlisten 67943 00:13:02.159 20:59:23 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:02.159 20:59:23 -- common/autotest_common.sh@829 -- # '[' -z 67943 ']' 00:13:02.159 20:59:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.159 20:59:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.159 20:59:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.159 20:59:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.159 20:59:23 -- common/autotest_common.sh@10 -- # set +x 00:13:02.418 [2024-12-08 20:59:23.274655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:02.418 [2024-12-08 20:59:23.274834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67943 ] 00:13:02.418 [2024-12-08 20:59:23.442574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.675 [2024-12-08 20:59:23.589868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:02.675 [2024-12-08 20:59:23.590119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.242 20:59:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.242 20:59:24 -- common/autotest_common.sh@862 -- # return 0 00:13:03.242 20:59:24 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:13:03.242 20:59:24 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:13:03.242 20:59:24 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:13:03.242 20:59:24 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:13:03.242 20:59:24 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:03.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:03.806 Waiting for block devices as requested 00:13:03.806 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.062 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.062 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.062 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:09.330 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:09.330 20:59:30 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:13:09.330 20:59:30 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:13:09.330 20:59:30 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:13:09.330 20:59:30 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:13:09.330 20:59:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:13:09.330 20:59:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:13:09.330 20:59:30 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:13:09.330 20:59:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:13:09.330 20:59:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:13:09.330 20:59:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:13:09.330 20:59:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:13:09.330 20:59:30 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:13:09.330 20:59:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:09.330 20:59:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:13:09.330 20:59:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:13:09.330 20:59:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:13:09.330 20:59:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:13:09.330 20:59:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:09.330 20:59:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:13:09.330 20:59:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:13:09.330 20:59:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:13:09.330 20:59:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:13:09.330 20:59:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:09.330 20:59:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:13:09.330 20:59:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:13:09.330 20:59:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:13:09.331 20:59:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:13:09.331 20:59:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:09.331 20:59:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:13:09.331 20:59:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:13:09.331 20:59:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:13:09.331 20:59:30 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:13:09.331 20:59:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:09.331 20:59:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:13:09.331 20:59:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:13:09.331 20:59:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:13:09.331 20:59:30 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:13:09.331 20:59:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:09.331 20:59:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:09.331 20:59:30 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:09.331 20:59:30 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:09.331 20:59:30 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:09.331 20:59:30 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:09.331 20:59:30 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:09.331 20:59:30 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:13:09.331 20:59:30 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:13:09.331 20:59:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.331 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:13:09.331 20:59:30 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:09.331 nvme0n1 00:13:09.331 nvme1n1 00:13:09.331 nvme1n2 00:13:09.331 nvme1n3 00:13:09.331 nvme2n1 00:13:09.331 nvme3n1 00:13:09.331 20:59:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:13:09.331 20:59:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.331 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:13:09.331 20:59:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@738 -- # cat 00:13:09.331 20:59:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:13:09.331 20:59:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.331 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:13:09.331 20:59:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:13:09.331 20:59:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.331 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:13:09.331 20:59:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:09.331 20:59:30 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.331 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:13:09.331 20:59:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:13:09.331 20:59:30 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:13:09.331 20:59:30 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:13:09.331 20:59:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.331 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:13:09.331 20:59:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.331 20:59:30 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:13:09.331 20:59:30 -- bdev/blockdev.sh@747 -- # jq -r .name 00:13:09.332 20:59:30 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1ad05498-2ecc-4742-a41b-61798788900b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1ad05498-2ecc-4742-a41b-61798788900b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "82542689-8280-4bbf-9a9c-9408a1045fa1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "82542689-8280-4bbf-9a9c-9408a1045fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "a8997a4e-575c-4608-8560-5dffd205fbde"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8997a4e-575c-4608-8560-5dffd205fbde",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "beb259f4-d630-4823-a64c-d763309a5ac2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "beb259f4-d630-4823-a64c-d763309a5ac2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a0d41d4c-6092-4826-9386-0787692d9f2c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a0d41d4c-6092-4826-9386-0787692d9f2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5323c12f-b5b4-4a01-af72-8f3c710dd116"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5323c12f-b5b4-4a01-af72-8f3c710dd116",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:09.657 20:59:30 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:13:09.657 20:59:30 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:13:09.657 20:59:30 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:13:09.657 20:59:30 -- bdev/blockdev.sh@752 -- # killprocess 67943 00:13:09.658 20:59:30 -- common/autotest_common.sh@936 -- # '[' -z 67943 ']' 00:13:09.658 20:59:30 -- common/autotest_common.sh@940 -- # kill -0 67943 00:13:09.658 20:59:30 -- common/autotest_common.sh@941 -- # uname 00:13:09.658 20:59:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:09.658 20:59:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67943 00:13:09.658 killing process with pid 67943 00:13:09.658 20:59:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:09.658 20:59:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:09.658 20:59:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67943' 00:13:09.658 20:59:30 -- common/autotest_common.sh@955 -- # kill 67943 00:13:09.658 20:59:30 -- common/autotest_common.sh@960 -- # wait 67943 00:13:11.066 20:59:32 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:11.066 20:59:32 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:11.066 20:59:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:13:11.066 20:59:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.066 20:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:11.067 ************************************ 00:13:11.067 START TEST bdev_hello_world 00:13:11.067 ************************************ 00:13:11.067 20:59:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:11.326 [2024-12-08 20:59:32.151422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
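The raw bdev_get_bdevs dump a few records up is easier to audit condensed. A sketch of a one-liner for that, to be run while a target with these bdevs is still up (assumes jq is installed; the rpc.py path mirrors the traces and the default /var/tmp/spdk.sock socket):

# Condense the bdev_get_bdevs JSON above to one line per bdev:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | [.name, .block_size, .num_blocks, .product_name] | @tsv'
# Expected here: six "xNVMe bdev" rows, from nvme0n1 (262144 x 4096 = 1 GiB)
# to nvme3n1 (1310720 x 4096 = 5 GiB), matching the I/O targets that bdevio
# prints further down.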
00:13:11.326 [2024-12-08 20:59:32.151591] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68332 ] 00:13:11.326 [2024-12-08 20:59:32.320135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.585 [2024-12-08 20:59:32.463804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.844 [2024-12-08 20:59:32.786373] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:11.844 [2024-12-08 20:59:32.786423] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:11.844 [2024-12-08 20:59:32.786474] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:11.844 [2024-12-08 20:59:32.788383] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:11.844 [2024-12-08 20:59:32.788846] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:11.844 [2024-12-08 20:59:32.788907] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:11.844 [2024-12-08 20:59:32.789172] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:11.844 00:13:11.844 [2024-12-08 20:59:32.789200] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:12.781 00:13:12.781 real 0m1.580s 00:13:12.781 user 0m1.291s 00:13:12.781 sys 0m0.177s 00:13:12.781 20:59:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:12.781 ************************************ 00:13:12.781 END TEST bdev_hello_world 00:13:12.781 ************************************ 00:13:12.781 20:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:12.781 20:59:33 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:13:12.781 20:59:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:12.781 20:59:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:12.781 20:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:12.781 ************************************ 00:13:12.781 START TEST bdev_bounds 00:13:12.781 ************************************ 00:13:12.781 Process bdevio pid: 68363 00:13:12.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.781 20:59:33 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:13:12.781 20:59:33 -- bdev/blockdev.sh@288 -- # bdevio_pid=68363 00:13:12.781 20:59:33 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:12.781 20:59:33 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:12.781 20:59:33 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 68363' 00:13:12.781 20:59:33 -- bdev/blockdev.sh@291 -- # waitforlisten 68363 00:13:12.781 20:59:33 -- common/autotest_common.sh@829 -- # '[' -z 68363 ']' 00:13:12.781 20:59:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.781 20:59:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.781 20:59:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
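bdev_bounds, which starts here, drives the compiled bdevio app rather than a shell loop: bdevio comes up with -w against the shared bdev.json and waits, then a Python helper fires the suite over RPC. A sketch of that pattern; paths, flags, and the trailing '' argument mirror the traces, and -s 0 forwards the PRE_RESERVED_MEM=0 seen earlier.

# bdev_bounds driver pattern (sketch; the harness also waits for the RPC
# socket via waitforlisten before calling tests.py):
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"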
00:13:12.781 20:59:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.781 20:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:12.781 [2024-12-08 20:59:33.757501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:12.781 [2024-12-08 20:59:33.757832] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68363 ] 00:13:13.040 [2024-12-08 20:59:33.911490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.040 [2024-12-08 20:59:34.067131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.040 [2024-12-08 20:59:34.067289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.040 [2024-12-08 20:59:34.067308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.976 20:59:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.976 20:59:34 -- common/autotest_common.sh@862 -- # return 0 00:13:13.976 20:59:34 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:13.976 I/O targets: 00:13:13.976 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:13.976 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:13.976 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:13.976 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:13.976 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:13.976 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:13.976 00:13:13.976 00:13:13.976 CUnit - A unit testing framework for C - Version 2.1-3 00:13:13.976 http://cunit.sourceforge.net/ 00:13:13.976 00:13:13.976 00:13:13.976 Suite: bdevio tests on: nvme3n1 00:13:13.976 Test: blockdev write read block ...passed 00:13:13.976 Test: blockdev write zeroes read block ...passed 00:13:13.976 Test: blockdev write zeroes read no split ...passed 00:13:13.976 Test: blockdev write zeroes read split ...passed 00:13:13.977 Test: blockdev write zeroes read split partial ...passed 00:13:13.977 Test: blockdev reset ...passed 00:13:13.977 Test: blockdev write read 8 blocks ...passed 00:13:13.977 Test: blockdev write read size > 128k ...passed 00:13:13.977 Test: blockdev write read invalid size ...passed 00:13:13.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.977 Test: blockdev write read max offset ...passed 00:13:13.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.977 Test: blockdev writev readv 8 blocks ...passed 00:13:13.977 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.977 Test: blockdev writev readv block ...passed 00:13:13.977 Test: blockdev writev readv size > 128k ...passed 00:13:13.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.977 Test: blockdev comparev and writev ...passed 00:13:13.977 Test: blockdev nvme passthru rw ...passed 00:13:13.977 Test: blockdev nvme passthru vendor specific ...passed 00:13:13.977 Test: blockdev nvme admin passthru ...passed 00:13:13.977 Test: blockdev copy ...passed 00:13:13.977 Suite: bdevio tests on: nvme2n1 00:13:13.977 Test: blockdev write read block ...passed 00:13:13.977 Test: blockdev write zeroes read block ...passed 00:13:13.977 Test: blockdev write zeroes read no split ...passed 00:13:13.977 Test: blockdev 
write zeroes read split ...passed 00:13:13.977 Test: blockdev write zeroes read split partial ...passed 00:13:13.977 Test: blockdev reset ...passed 00:13:13.977 Test: blockdev write read 8 blocks ...passed 00:13:13.977 Test: blockdev write read size > 128k ...passed 00:13:13.977 Test: blockdev write read invalid size ...passed 00:13:13.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.977 Test: blockdev write read max offset ...passed 00:13:13.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.977 Test: blockdev writev readv 8 blocks ...passed 00:13:13.977 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.977 Test: blockdev writev readv block ...passed 00:13:13.977 Test: blockdev writev readv size > 128k ...passed 00:13:13.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.977 Test: blockdev comparev and writev ...passed 00:13:13.977 Test: blockdev nvme passthru rw ...passed 00:13:13.977 Test: blockdev nvme passthru vendor specific ...passed 00:13:13.977 Test: blockdev nvme admin passthru ...passed 00:13:13.977 Test: blockdev copy ...passed 00:13:13.977 Suite: bdevio tests on: nvme1n3 00:13:13.977 Test: blockdev write read block ...passed 00:13:13.977 Test: blockdev write zeroes read block ...passed 00:13:13.977 Test: blockdev write zeroes read no split ...passed 00:13:13.977 Test: blockdev write zeroes read split ...passed 00:13:13.977 Test: blockdev write zeroes read split partial ...passed 00:13:13.977 Test: blockdev reset ...passed 00:13:13.977 Test: blockdev write read 8 blocks ...passed 00:13:13.977 Test: blockdev write read size > 128k ...passed 00:13:13.977 Test: blockdev write read invalid size ...passed 00:13:13.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.977 Test: blockdev write read max offset ...passed 00:13:13.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.977 Test: blockdev writev readv 8 blocks ...passed 00:13:13.977 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.977 Test: blockdev writev readv block ...passed 00:13:13.977 Test: blockdev writev readv size > 128k ...passed 00:13:13.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.977 Test: blockdev comparev and writev ...passed 00:13:13.977 Test: blockdev nvme passthru rw ...passed 00:13:13.977 Test: blockdev nvme passthru vendor specific ...passed 00:13:13.977 Test: blockdev nvme admin passthru ...passed 00:13:13.977 Test: blockdev copy ...passed 00:13:13.977 Suite: bdevio tests on: nvme1n2 00:13:13.977 Test: blockdev write read block ...passed 00:13:13.977 Test: blockdev write zeroes read block ...passed 00:13:13.977 Test: blockdev write zeroes read no split ...passed 00:13:13.977 Test: blockdev write zeroes read split ...passed 00:13:14.235 Test: blockdev write zeroes read split partial ...passed 00:13:14.235 Test: blockdev reset ...passed 00:13:14.235 Test: blockdev write read 8 blocks ...passed 00:13:14.235 Test: blockdev write read size > 128k ...passed 00:13:14.235 Test: blockdev write read invalid size ...passed 00:13:14.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.235 Test: blockdev write read max offset 
...passed 00:13:14.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.235 Test: blockdev writev readv 8 blocks ...passed 00:13:14.235 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.235 Test: blockdev writev readv block ...passed 00:13:14.235 Test: blockdev writev readv size > 128k ...passed 00:13:14.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.235 Test: blockdev comparev and writev ...passed 00:13:14.235 Test: blockdev nvme passthru rw ...passed 00:13:14.235 Test: blockdev nvme passthru vendor specific ...passed 00:13:14.235 Test: blockdev nvme admin passthru ...passed 00:13:14.235 Test: blockdev copy ...passed 00:13:14.235 Suite: bdevio tests on: nvme1n1 00:13:14.235 Test: blockdev write read block ...passed 00:13:14.235 Test: blockdev write zeroes read block ...passed 00:13:14.235 Test: blockdev write zeroes read no split ...passed 00:13:14.235 Test: blockdev write zeroes read split ...passed 00:13:14.235 Test: blockdev write zeroes read split partial ...passed 00:13:14.235 Test: blockdev reset ...passed 00:13:14.235 Test: blockdev write read 8 blocks ...passed 00:13:14.235 Test: blockdev write read size > 128k ...passed 00:13:14.235 Test: blockdev write read invalid size ...passed 00:13:14.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.235 Test: blockdev write read max offset ...passed 00:13:14.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.235 Test: blockdev writev readv 8 blocks ...passed 00:13:14.235 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.235 Test: blockdev writev readv block ...passed 00:13:14.235 Test: blockdev writev readv size > 128k ...passed 00:13:14.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.235 Test: blockdev comparev and writev ...passed 00:13:14.235 Test: blockdev nvme passthru rw ...passed 00:13:14.235 Test: blockdev nvme passthru vendor specific ...passed 00:13:14.235 Test: blockdev nvme admin passthru ...passed 00:13:14.235 Test: blockdev copy ...passed 00:13:14.235 Suite: bdevio tests on: nvme0n1 00:13:14.235 Test: blockdev write read block ...passed 00:13:14.235 Test: blockdev write zeroes read block ...passed 00:13:14.235 Test: blockdev write zeroes read no split ...passed 00:13:14.235 Test: blockdev write zeroes read split ...passed 00:13:14.235 Test: blockdev write zeroes read split partial ...passed 00:13:14.235 Test: blockdev reset ...passed 00:13:14.235 Test: blockdev write read 8 blocks ...passed 00:13:14.235 Test: blockdev write read size > 128k ...passed 00:13:14.235 Test: blockdev write read invalid size ...passed 00:13:14.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.235 Test: blockdev write read max offset ...passed 00:13:14.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.235 Test: blockdev writev readv 8 blocks ...passed 00:13:14.235 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.235 Test: blockdev writev readv block ...passed 00:13:14.235 Test: blockdev writev readv size > 128k ...passed 00:13:14.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.235 Test: blockdev comparev and writev ...passed 00:13:14.235 Test: blockdev nvme passthru rw ...passed 00:13:14.235 Test: 
blockdev nvme passthru vendor specific ...passed 00:13:14.235 Test: blockdev nvme admin passthru ...passed 00:13:14.235 Test: blockdev copy ...passed 00:13:14.235 00:13:14.235 Run Summary: Type Total Ran Passed Failed Inactive 00:13:14.235 suites 6 6 n/a 0 0 00:13:14.235 tests 138 138 138 0 0 00:13:14.235 asserts 780 780 780 0 n/a 00:13:14.235 00:13:14.235 Elapsed time = 1.094 seconds 00:13:14.235 0 00:13:14.235 20:59:35 -- bdev/blockdev.sh@293 -- # killprocess 68363 00:13:14.235 20:59:35 -- common/autotest_common.sh@936 -- # '[' -z 68363 ']' 00:13:14.235 20:59:35 -- common/autotest_common.sh@940 -- # kill -0 68363 00:13:14.235 20:59:35 -- common/autotest_common.sh@941 -- # uname 00:13:14.235 20:59:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:14.235 20:59:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68363 00:13:14.235 killing process with pid 68363 00:13:14.235 20:59:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:14.235 20:59:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:14.235 20:59:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68363' 00:13:14.235 20:59:35 -- common/autotest_common.sh@955 -- # kill 68363 00:13:14.235 20:59:35 -- common/autotest_common.sh@960 -- # wait 68363 00:13:15.176 20:59:36 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:13:15.176 00:13:15.176 real 0m2.407s 00:13:15.176 user 0m5.888s 00:13:15.176 sys 0m0.295s 00:13:15.176 ************************************ 00:13:15.176 END TEST bdev_bounds 00:13:15.176 ************************************ 00:13:15.176 20:59:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:15.176 20:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:15.176 20:59:36 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:13:15.176 20:59:36 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:15.176 20:59:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:15.176 20:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:15.176 ************************************ 00:13:15.176 START TEST bdev_nbd 00:13:15.176 ************************************ 00:13:15.176 20:59:36 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:13:15.176 20:59:36 -- bdev/blockdev.sh@298 -- # uname -s 00:13:15.176 20:59:36 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:13:15.176 20:59:36 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.176 20:59:36 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:15.176 20:59:36 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:15.176 20:59:36 -- bdev/blockdev.sh@302 -- # local bdev_all 00:13:15.176 20:59:36 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:13:15.176 20:59:36 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:13:15.176 20:59:36 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:15.176 20:59:36 -- bdev/blockdev.sh@309 -- # local nbd_all 00:13:15.176 20:59:36 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:13:15.176 
20:59:36 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:15.176 20:59:36 -- bdev/blockdev.sh@312 -- # local nbd_list 00:13:15.176 20:59:36 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:15.176 20:59:36 -- bdev/blockdev.sh@313 -- # local bdev_list 00:13:15.176 20:59:36 -- bdev/blockdev.sh@316 -- # nbd_pid=68425 00:13:15.176 20:59:36 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:15.176 20:59:36 -- bdev/blockdev.sh@318 -- # waitforlisten 68425 /var/tmp/spdk-nbd.sock 00:13:15.176 20:59:36 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:15.176 20:59:36 -- common/autotest_common.sh@829 -- # '[' -z 68425 ']' 00:13:15.176 20:59:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:15.176 20:59:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:15.176 20:59:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:15.176 20:59:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.176 20:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:15.435 [2024-12-08 20:59:36.248481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:15.435 [2024-12-08 20:59:36.248675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.435 [2024-12-08 20:59:36.419362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.694 [2024-12-08 20:59:36.561907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.261 20:59:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.261 20:59:37 -- common/autotest_common.sh@862 -- # return 0 00:13:16.261 20:59:37 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@24 -- # local i 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:16.261 20:59:37 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:16.519 20:59:37 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:16.519 20:59:37 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:16.519 20:59:37 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:16.519 20:59:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:16.519 20:59:37 -- common/autotest_common.sh@867 -- # local i 00:13:16.519 20:59:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:16.519 20:59:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:16.519 20:59:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:16.519 20:59:37 -- common/autotest_common.sh@871 -- # break 00:13:16.519 20:59:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:16.519 20:59:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:16.519 20:59:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.519 1+0 records in 00:13:16.519 1+0 records out 00:13:16.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471145 s, 8.7 MB/s 00:13:16.519 20:59:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.519 20:59:37 -- common/autotest_common.sh@884 -- # size=4096 00:13:16.519 20:59:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.519 20:59:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:16.519 20:59:37 -- common/autotest_common.sh@887 -- # return 0 00:13:16.519 20:59:37 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:16.519 20:59:37 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:16.519 20:59:37 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:16.778 20:59:37 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:16.778 20:59:37 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:16.778 20:59:37 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:16.778 20:59:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:16.778 20:59:37 -- common/autotest_common.sh@867 -- # local i 00:13:16.778 20:59:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:16.778 20:59:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:16.778 20:59:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:16.778 20:59:37 -- common/autotest_common.sh@871 -- # break 00:13:16.778 20:59:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:16.778 20:59:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:16.778 20:59:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.778 1+0 records in 00:13:16.778 1+0 records out 00:13:16.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544915 s, 7.5 MB/s 00:13:16.778 20:59:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.778 20:59:37 -- common/autotest_common.sh@884 -- # size=4096 00:13:16.778 20:59:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.778 20:59:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:16.778 20:59:37 -- common/autotest_common.sh@887 -- # return 0 00:13:16.778 20:59:37 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:16.778 20:59:37 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:16.778 20:59:37 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:13:17.036 20:59:37 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:17.036 20:59:37 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:17.036 20:59:37 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:17.036 20:59:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:13:17.036 20:59:37 -- common/autotest_common.sh@867 -- # local i 00:13:17.036 20:59:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:17.036 20:59:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:17.036 20:59:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:13:17.036 20:59:37 -- common/autotest_common.sh@871 -- # break 00:13:17.036 20:59:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:17.036 20:59:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:17.036 20:59:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.036 1+0 records in 00:13:17.036 1+0 records out 00:13:17.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000976249 s, 4.2 MB/s 00:13:17.036 20:59:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.036 20:59:37 -- common/autotest_common.sh@884 -- # size=4096 00:13:17.036 20:59:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.036 20:59:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:17.036 20:59:37 -- common/autotest_common.sh@887 -- # return 0 00:13:17.036 20:59:37 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.036 20:59:37 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.036 20:59:37 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:13:17.293 20:59:38 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:17.293 20:59:38 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:17.293 20:59:38 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:17.293 20:59:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:13:17.293 20:59:38 -- common/autotest_common.sh@867 -- # local i 00:13:17.293 20:59:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:17.293 20:59:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:17.293 20:59:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:13:17.293 20:59:38 -- common/autotest_common.sh@871 -- # break 00:13:17.293 20:59:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:17.293 20:59:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:17.293 20:59:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.293 1+0 records in 00:13:17.293 1+0 records out 00:13:17.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544595 s, 7.5 MB/s 00:13:17.293 20:59:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.293 20:59:38 -- common/autotest_common.sh@884 -- # size=4096 00:13:17.293 20:59:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.293 20:59:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:17.293 20:59:38 -- common/autotest_common.sh@887 -- # return 0 00:13:17.293 20:59:38 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.293 20:59:38 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.293 20:59:38 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:17.551 20:59:38 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:17.551 20:59:38 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:17.551 20:59:38 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:17.551 20:59:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:13:17.551 20:59:38 -- common/autotest_common.sh@867 -- # local i 00:13:17.551 20:59:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:17.551 20:59:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:17.551 20:59:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:13:17.551 20:59:38 -- common/autotest_common.sh@871 -- # break 00:13:17.551 20:59:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:17.551 20:59:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:17.551 20:59:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.551 1+0 records in 00:13:17.551 1+0 records out 00:13:17.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689197 s, 5.9 MB/s 00:13:17.551 20:59:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.551 20:59:38 -- common/autotest_common.sh@884 -- # size=4096 00:13:17.551 20:59:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.551 20:59:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:17.551 20:59:38 -- common/autotest_common.sh@887 -- # return 0 00:13:17.551 20:59:38 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.551 20:59:38 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.551 20:59:38 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:17.809 20:59:38 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:17.809 20:59:38 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:17.809 20:59:38 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:17.809 20:59:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:13:17.809 20:59:38 -- common/autotest_common.sh@867 -- # local i 00:13:17.809 20:59:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:17.809 20:59:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:17.809 20:59:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:13:17.809 20:59:38 -- common/autotest_common.sh@871 -- # break 00:13:17.809 20:59:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:17.809 20:59:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:17.809 20:59:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.809 1+0 records in 00:13:17.809 1+0 records out 00:13:17.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817926 s, 5.0 MB/s 00:13:17.809 20:59:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.809 20:59:38 -- common/autotest_common.sh@884 -- # size=4096 00:13:17.809 20:59:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.809 20:59:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:17.809 20:59:38 -- common/autotest_common.sh@887 -- # return 0 
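
The six nbd_start_disk sequences traced above all follow one readiness pattern: expose a bdev as /dev/nbdN over the RPC socket, poll /proc/partitions until the kernel registers the device, then prove it services I/O with a single 4 KiB O_DIRECT read. A minimal standalone sketch of that pattern, in the same shell idiom (the loop bound, grep, dd, and stat checks mirror the trace; the sleep interval and the scratch-file path are assumptions):

    # Sketch of the waitfornbd readiness check traced above.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # A live NBD device shows up in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed poll interval; the trace only shows the loop bounds
        done
        # One direct-I/O block read proves the device actually answers requests ...
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        # ... and a non-empty result file proves data really came back.
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }

    # Usage, as in the trace:
    #   rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
    #   waitfornbd nbd0
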
00:13:17.809 20:59:38 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.809 20:59:38 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.809 20:59:38 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:18.067 20:59:39 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:18.067 { 00:13:18.067 "nbd_device": "/dev/nbd0", 00:13:18.067 "bdev_name": "nvme0n1" 00:13:18.067 }, 00:13:18.067 { 00:13:18.067 "nbd_device": "/dev/nbd1", 00:13:18.067 "bdev_name": "nvme1n1" 00:13:18.067 }, 00:13:18.067 { 00:13:18.067 "nbd_device": "/dev/nbd2", 00:13:18.067 "bdev_name": "nvme1n2" 00:13:18.067 }, 00:13:18.067 { 00:13:18.067 "nbd_device": "/dev/nbd3", 00:13:18.067 "bdev_name": "nvme1n3" 00:13:18.067 }, 00:13:18.067 { 00:13:18.067 "nbd_device": "/dev/nbd4", 00:13:18.067 "bdev_name": "nvme2n1" 00:13:18.067 }, 00:13:18.067 { 00:13:18.067 "nbd_device": "/dev/nbd5", 00:13:18.067 "bdev_name": "nvme3n1" 00:13:18.067 } 00:13:18.067 ]' 00:13:18.067 20:59:39 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:18.067 20:59:39 -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:18.067 { 00:13:18.067 "nbd_device": "/dev/nbd0", 00:13:18.067 "bdev_name": "nvme0n1" 00:13:18.067 }, 00:13:18.067 { 00:13:18.067 "nbd_device": "/dev/nbd1", 00:13:18.067 "bdev_name": "nvme1n1" 00:13:18.067 }, 00:13:18.067 { 00:13:18.068 "nbd_device": "/dev/nbd2", 00:13:18.068 "bdev_name": "nvme1n2" 00:13:18.068 }, 00:13:18.068 { 00:13:18.068 "nbd_device": "/dev/nbd3", 00:13:18.068 "bdev_name": "nvme1n3" 00:13:18.068 }, 00:13:18.068 { 00:13:18.068 "nbd_device": "/dev/nbd4", 00:13:18.068 "bdev_name": "nvme2n1" 00:13:18.068 }, 00:13:18.068 { 00:13:18.068 "nbd_device": "/dev/nbd5", 00:13:18.068 "bdev_name": "nvme3n1" 00:13:18.068 } 00:13:18.068 ]' 00:13:18.068 20:59:39 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:18.068 20:59:39 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:18.068 20:59:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:18.068 20:59:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:18.068 20:59:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.068 20:59:39 -- bdev/nbd_common.sh@51 -- # local i 00:13:18.068 20:59:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.068 20:59:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@41 -- # break 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.358 20:59:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@41 -- # break 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.614 20:59:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@41 -- # break 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.871 20:59:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@41 -- # break 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.127 20:59:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@41 -- # break 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.384 20:59:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@41 -- # break 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.640 20:59:40 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.640 20:59:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@65 -- # true 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@65 -- # count=0 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@122 -- # count=0 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@127 -- # return 0 00:13:19.898 20:59:40 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@12 -- # local i 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:19.898 20:59:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:20.155 /dev/nbd0 00:13:20.155 20:59:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:20.155 20:59:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:20.155 20:59:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:20.155 20:59:41 -- common/autotest_common.sh@867 -- # local i 00:13:20.155 20:59:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:20.155 20:59:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:20.155 20:59:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:20.414 20:59:41 -- common/autotest_common.sh@871 -- # break 00:13:20.414 20:59:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:20.414 20:59:41 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:13:20.414 20:59:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.414 1+0 records in 00:13:20.414 1+0 records out 00:13:20.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393944 s, 10.4 MB/s 00:13:20.414 20:59:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.414 20:59:41 -- common/autotest_common.sh@884 -- # size=4096 00:13:20.414 20:59:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.414 20:59:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:20.414 20:59:41 -- common/autotest_common.sh@887 -- # return 0 00:13:20.414 20:59:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.414 20:59:41 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.414 20:59:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:20.414 /dev/nbd1 00:13:20.414 20:59:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:20.414 20:59:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:20.414 20:59:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:20.414 20:59:41 -- common/autotest_common.sh@867 -- # local i 00:13:20.414 20:59:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:20.414 20:59:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:20.414 20:59:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:20.414 20:59:41 -- common/autotest_common.sh@871 -- # break 00:13:20.414 20:59:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:20.414 20:59:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:20.414 20:59:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.414 1+0 records in 00:13:20.414 1+0 records out 00:13:20.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568735 s, 7.2 MB/s 00:13:20.414 20:59:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.414 20:59:41 -- common/autotest_common.sh@884 -- # size=4096 00:13:20.414 20:59:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.414 20:59:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:20.414 20:59:41 -- common/autotest_common.sh@887 -- # return 0 00:13:20.414 20:59:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.414 20:59:41 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.414 20:59:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:13:20.673 /dev/nbd10 00:13:20.673 20:59:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:20.673 20:59:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:20.673 20:59:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:13:20.673 20:59:41 -- common/autotest_common.sh@867 -- # local i 00:13:20.673 20:59:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:20.673 20:59:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:20.673 20:59:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:13:20.673 20:59:41 -- common/autotest_common.sh@871 -- # break 00:13:20.673 20:59:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:20.673 20:59:41 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:20.673 20:59:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.673 1+0 records in 00:13:20.673 1+0 records out 00:13:20.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526407 s, 7.8 MB/s 00:13:20.673 20:59:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.673 20:59:41 -- common/autotest_common.sh@884 -- # size=4096 00:13:20.673 20:59:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.673 20:59:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:20.673 20:59:41 -- common/autotest_common.sh@887 -- # return 0 00:13:20.673 20:59:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.673 20:59:41 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.673 20:59:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:13:20.932 /dev/nbd11 00:13:20.932 20:59:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:20.932 20:59:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:20.932 20:59:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:13:20.932 20:59:41 -- common/autotest_common.sh@867 -- # local i 00:13:20.932 20:59:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:20.932 20:59:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:20.932 20:59:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:13:20.932 20:59:41 -- common/autotest_common.sh@871 -- # break 00:13:20.932 20:59:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:20.932 20:59:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:20.932 20:59:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.932 1+0 records in 00:13:20.932 1+0 records out 00:13:20.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580876 s, 7.1 MB/s 00:13:20.932 20:59:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.932 20:59:41 -- common/autotest_common.sh@884 -- # size=4096 00:13:20.932 20:59:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.932 20:59:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:20.932 20:59:41 -- common/autotest_common.sh@887 -- # return 0 00:13:20.932 20:59:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.932 20:59:41 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.932 20:59:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:13:21.191 /dev/nbd12 00:13:21.191 20:59:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:21.191 20:59:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:21.191 20:59:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:13:21.191 20:59:42 -- common/autotest_common.sh@867 -- # local i 00:13:21.191 20:59:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:21.191 20:59:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:21.191 20:59:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:13:21.191 20:59:42 -- common/autotest_common.sh@871 -- # break 00:13:21.191 20:59:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:13:21.191 20:59:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:21.191 20:59:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.191 1+0 records in 00:13:21.191 1+0 records out 00:13:21.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903097 s, 4.5 MB/s 00:13:21.191 20:59:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.191 20:59:42 -- common/autotest_common.sh@884 -- # size=4096 00:13:21.191 20:59:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.191 20:59:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:21.191 20:59:42 -- common/autotest_common.sh@887 -- # return 0 00:13:21.191 20:59:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.191 20:59:42 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.191 20:59:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:21.450 /dev/nbd13 00:13:21.450 20:59:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:21.450 20:59:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:21.450 20:59:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:13:21.450 20:59:42 -- common/autotest_common.sh@867 -- # local i 00:13:21.450 20:59:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:21.450 20:59:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:21.450 20:59:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:13:21.450 20:59:42 -- common/autotest_common.sh@871 -- # break 00:13:21.450 20:59:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:21.450 20:59:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:21.450 20:59:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.450 1+0 records in 00:13:21.450 1+0 records out 00:13:21.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000951203 s, 4.3 MB/s 00:13:21.450 20:59:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.450 20:59:42 -- common/autotest_common.sh@884 -- # size=4096 00:13:21.450 20:59:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.450 20:59:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:21.450 20:59:42 -- common/autotest_common.sh@887 -- # return 0 00:13:21.450 20:59:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.450 20:59:42 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.450 20:59:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:21.450 20:59:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.450 20:59:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:21.709 20:59:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd0", 00:13:21.709 "bdev_name": "nvme0n1" 00:13:21.709 }, 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd1", 00:13:21.709 "bdev_name": "nvme1n1" 00:13:21.709 }, 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd10", 00:13:21.709 "bdev_name": "nvme1n2" 00:13:21.709 }, 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd11", 00:13:21.709 "bdev_name": "nvme1n3" 00:13:21.709 }, 00:13:21.709 { 
00:13:21.709 "nbd_device": "/dev/nbd12", 00:13:21.709 "bdev_name": "nvme2n1" 00:13:21.709 }, 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd13", 00:13:21.709 "bdev_name": "nvme3n1" 00:13:21.709 } 00:13:21.709 ]' 00:13:21.709 20:59:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd0", 00:13:21.709 "bdev_name": "nvme0n1" 00:13:21.709 }, 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd1", 00:13:21.709 "bdev_name": "nvme1n1" 00:13:21.709 }, 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd10", 00:13:21.709 "bdev_name": "nvme1n2" 00:13:21.709 }, 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd11", 00:13:21.709 "bdev_name": "nvme1n3" 00:13:21.709 }, 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd12", 00:13:21.709 "bdev_name": "nvme2n1" 00:13:21.709 }, 00:13:21.709 { 00:13:21.709 "nbd_device": "/dev/nbd13", 00:13:21.709 "bdev_name": "nvme3n1" 00:13:21.709 } 00:13:21.709 ]' 00:13:21.709 20:59:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:21.968 /dev/nbd1 00:13:21.968 /dev/nbd10 00:13:21.968 /dev/nbd11 00:13:21.968 /dev/nbd12 00:13:21.968 /dev/nbd13' 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:21.968 /dev/nbd1 00:13:21.968 /dev/nbd10 00:13:21.968 /dev/nbd11 00:13:21.968 /dev/nbd12 00:13:21.968 /dev/nbd13' 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@65 -- # count=6 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@66 -- # echo 6 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@95 -- # count=6 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:21.968 256+0 records in 00:13:21.968 256+0 records out 00:13:21.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0078253 s, 134 MB/s 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:21.968 256+0 records in 00:13:21.968 256+0 records out 00:13:21.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164364 s, 6.4 MB/s 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.968 20:59:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:22.226 256+0 records in 00:13:22.226 256+0 records out 00:13:22.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160406 s, 6.5 MB/s 00:13:22.226 20:59:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:22.226 20:59:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 
count=256 oflag=direct
00:13:22.485 256+0 records in
00:13:22.485 256+0 records out
00:13:22.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16554 s, 6.3 MB/s
00:13:22.485 20:59:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:22.485 20:59:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:13:22.485 256+0 records in
00:13:22.485 256+0 records out
00:13:22.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143461 s, 7.3 MB/s
00:13:22.485 20:59:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:22.485 20:59:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:13:22.744 256+0 records in
00:13:22.744 256+0 records out
00:13:22.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183965 s, 5.7 MB/s
00:13:22.744 20:59:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:22.744 20:59:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:13:23.004 256+0 records in
00:13:23.004 256+0 records out
00:13:23.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162927 s, 6.4 MB/s
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
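
The block above completes a full data-integrity pass over the six devices mapped at fixed indices: 256 random 4 KiB blocks were written to every NBD device with O_DIRECT, and the first 1 MiB of each device is then byte-compared against the source file before the devices are torn down. A condensed sketch of that write-then-verify round trip (paths and device list as in the trace; the explicit failure message is an assumption, not part of the traced suite):

    # Sketch of the nbd_dd_data_verify write/verify pass traced above.
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    # Write phase: push 1 MiB of random data to every device, bypassing the
    # page cache so the bytes really travel through the SPDK nbd server.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: byte-compare the first 1M of each device with the source.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev" >&2  # assumed handling
    done
    rm "$tmp_file"
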
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@51 -- # local i
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:23.004 20:59:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:23.263 20:59:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:23.263 20:59:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:23.263 20:59:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:23.263 20:59:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:23.263 20:59:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:23.263 20:59:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:23.264 20:59:44 -- bdev/nbd_common.sh@41 -- # break
00:13:23.264 20:59:44 -- bdev/nbd_common.sh@45 -- # return 0
00:13:23.264 20:59:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:23.264 20:59:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@41 -- # break
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@45 -- # return 0
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:23.522 20:59:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@41 -- # break
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@45 -- # return 0
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:23.781 20:59:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@41 -- # break
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@45 -- # return 0
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:24.039 20:59:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@41 -- # break 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.296 20:59:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@41 -- # break 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.553 20:59:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@65 -- # true 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@65 -- # count=0 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@104 -- # count=0 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@109 -- # return 0 00:13:24.810 20:59:45 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.810 20:59:45 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:24.811 20:59:45 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:24.811 20:59:45 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:24.811 20:59:45 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:25.068 malloc_lvol_verify 00:13:25.068 20:59:46 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:25.326 3e1d9141-3937-44a6-8edf-e85c45e58181 00:13:25.326 20:59:46 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:13:25.584 8468e95b-c983-4d3e-a7b6-6619fbf6c858 00:13:25.584 20:59:46 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:25.843 /dev/nbd0 00:13:25.843 20:59:46 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:25.843 mke2fs 1.47.0 (5-Feb-2023) 00:13:25.843 Discarding device blocks: 0/4096 done 00:13:25.843 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:25.843 00:13:25.843 Allocating group tables: 0/1 done 00:13:25.843 Writing inode tables: 0/1 done 00:13:25.843 Creating journal (1024 blocks): done 00:13:25.843 Writing superblocks and filesystem accounting information: 0/1 done 00:13:25.843 00:13:25.843 20:59:46 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:25.843 20:59:46 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:25.843 20:59:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.843 20:59:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:25.843 20:59:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.843 20:59:46 -- bdev/nbd_common.sh@51 -- # local i 00:13:25.843 20:59:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.843 20:59:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@41 -- # break 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:26.102 20:59:46 -- bdev/nbd_common.sh@147 -- # return 0 00:13:26.102 20:59:46 -- bdev/blockdev.sh@324 -- # killprocess 68425 00:13:26.102 20:59:46 -- common/autotest_common.sh@936 -- # '[' -z 68425 ']' 00:13:26.102 20:59:46 -- common/autotest_common.sh@940 -- # kill -0 68425 00:13:26.102 20:59:46 -- common/autotest_common.sh@941 -- # uname 00:13:26.102 20:59:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:26.102 20:59:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68425 00:13:26.102 20:59:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:26.102 20:59:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:26.102 killing process with pid 68425 00:13:26.102 20:59:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68425' 00:13:26.102 20:59:46 -- common/autotest_common.sh@955 -- # kill 68425 00:13:26.102 20:59:46 -- common/autotest_common.sh@960 -- # wait 68425 00:13:27.039 20:59:47 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:27.039 00:13:27.039 real 0m11.717s 00:13:27.039 user 0m16.516s 00:13:27.039 sys 0m3.850s 00:13:27.039 20:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:27.039 20:59:47 -- common/autotest_common.sh@10 -- # set +x 00:13:27.039 ************************************ 00:13:27.039 END TEST bdev_nbd 00:13:27.039 ************************************ 00:13:27.039 20:59:47 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:27.039 20:59:47 -- 
bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:13:27.039 20:59:47 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:13:27.039 20:59:47 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:27.039 20:59:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:27.039 20:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:27.039 20:59:47 -- common/autotest_common.sh@10 -- # set +x 00:13:27.039 ************************************ 00:13:27.039 START TEST bdev_fio 00:13:27.039 ************************************ 00:13:27.039 20:59:47 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:13:27.039 20:59:47 -- bdev/blockdev.sh@329 -- # local env_context 00:13:27.039 20:59:47 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:27.039 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:27.039 20:59:47 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:27.039 20:59:47 -- bdev/blockdev.sh@337 -- # echo '' 00:13:27.039 20:59:47 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:27.039 20:59:47 -- bdev/blockdev.sh@337 -- # env_context= 00:13:27.039 20:59:47 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:27.039 20:59:47 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:27.039 20:59:47 -- common/autotest_common.sh@1270 -- # local workload=verify 00:13:27.039 20:59:47 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:13:27.039 20:59:47 -- common/autotest_common.sh@1272 -- # local env_context= 00:13:27.039 20:59:47 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:13:27.039 20:59:47 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:27.039 20:59:47 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:13:27.039 20:59:47 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:13:27.039 20:59:47 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:27.039 20:59:47 -- common/autotest_common.sh@1290 -- # cat 00:13:27.039 20:59:47 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:13:27.039 20:59:47 -- common/autotest_common.sh@1303 -- # cat 00:13:27.039 20:59:47 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:13:27.039 20:59:47 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:13:27.039 20:59:47 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:27.039 20:59:47 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:13:27.039 20:59:47 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:27.039 20:59:47 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:13:27.039 20:59:47 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:13:27.039 20:59:47 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:27.039 20:59:47 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:13:27.039 20:59:47 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:13:27.039 20:59:47 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:27.039 20:59:47 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:13:27.039 20:59:47 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:13:27.040 20:59:47 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:27.040 20:59:47 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 
00:13:27.040 20:59:47 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3
00:13:27.040 20:59:47 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:13:27.040 20:59:47 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]'
00:13:27.040 20:59:47 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1
00:13:27.040 20:59:47 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:13:27.040 20:59:47 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]'
00:13:27.040 20:59:47 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1
00:13:27.040 20:59:47 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:13:27.040 20:59:47 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:13:27.040 20:59:47 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:13:27.040 20:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:27.040 20:59:47 -- common/autotest_common.sh@10 -- # set +x
00:13:27.040 ************************************
00:13:27.040 START TEST bdev_fio_rw_verify
00:13:27.040 ************************************
00:13:27.040 20:59:47 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:13:27.040 20:59:47 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:13:27.040 20:59:47 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:13:27.040 20:59:47 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:27.040 20:59:47 -- common/autotest_common.sh@1328 -- # local sanitizers
00:13:27.040 20:59:47 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:27.040 20:59:47 -- common/autotest_common.sh@1330 -- # shift
00:13:27.040 20:59:47 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:13:27.040 20:59:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:13:27.040 20:59:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:27.040 20:59:47 -- common/autotest_common.sh@1334 -- # grep libasan
00:13:27.040 20:59:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:13:27.040 20:59:48 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
00:13:27.040 20:59:48 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:27.040 20:59:48 -- common/autotest_common.sh@1336 -- # break
00:13:27.040 20:59:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
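
Each pass of the loop above appends a two-line job section to bdev.fio; only a filename= line is needed per job because the spdk_bdev ioengine resolves filenames against the bdevs declared in --spdk_json_conf rather than against the filesystem. A condensed sketch of what the traced echoes build and how the file is then driven (the append-redirect form and the bdevs_name contents are reconstructions from the trace, not the verbatim script):

    # Sketch of the per-bdev job-section generation traced above.
    bdevs_name=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
    for b in "${bdevs_name[@]}"; do
        echo "[job_$b]"      # one fio job per bdev, e.g. [job_nvme0n1]
        echo "filename=$b"   # bdev name, resolved via --spdk_json_conf
    done >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

    # The finished config is run through fio's SPDK bdev plugin, preloaded
    # together with ASan exactly as the LD_PRELOAD line above shows:
    #   LD_PRELOAD='/usr/lib64/libasan.so.8 .../build/fio/spdk_bdev' \
    #     /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    #     bdev.fio --spdk_json_conf=.../test/bdev/bdev.json
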
00:13:27.040 20:59:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:13:27.299 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:13:27.299 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:13:27.299 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:13:27.299 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:13:27.299 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:13:27.299 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:13:27.299 fio-3.35
00:13:27.299 Starting 6 threads
00:13:39.505
00:13:39.505 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68839: Sun Dec 8 20:59:58 2024
00:13:39.505 read: IOPS=28.9k, BW=113MiB/s (118MB/s)(1129MiB/10001msec)
00:13:39.505 slat (usec): min=2, max=1826, avg= 7.03, stdev= 6.32
00:13:39.505 clat (usec): min=100, max=3778, avg=650.26, stdev=215.43
00:13:39.505 lat (usec): min=107, max=3788, avg=657.29, stdev=216.42
00:13:39.505 clat percentiles (usec):
00:13:39.505 | 50.000th=[ 693], 99.000th=[ 1123], 99.900th=[ 1647], 99.990th=[ 3163],
00:13:39.505 | 99.999th=[ 3752]
00:13:39.505 write: IOPS=29.3k, BW=114MiB/s (120MB/s)(1143MiB/10001msec); 0 zone resets
00:13:39.505 slat (usec): min=8, max=1159, avg=24.99, stdev=23.94
00:13:39.505 clat (usec): min=86, max=4217, avg=737.49, stdev=213.08
00:13:39.505 lat (usec): min=117, max=4245, avg=762.48, stdev=214.46
00:13:39.505 clat percentiles (usec):
00:13:39.505 | 50.000th=[ 758], 99.000th=[ 1270], 99.900th=[ 1696], 99.990th=[ 2245],
00:13:39.506 | 99.999th=[ 4178]
00:13:39.506 bw ( KiB/s): min=99008, max=142272, per=100.00%, avg=117831.00, stdev=2116.23, samples=114
00:13:39.506 iops : min=24752, max=35568, avg=29457.53, stdev=529.05, samples=114
00:13:39.506 lat (usec) : 100=0.01%, 250=2.54%, 500=17.16%, 750=36.26%, 1000=38.50%
00:13:39.506 lat (msec) : 2=5.52%, 4=0.02%, 10=0.01%
00:13:39.506 cpu : usr=61.32%, sys=26.21%, ctx=7791, majf=0, minf=26415
00:13:39.506 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:39.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:39.506 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:39.506 issued rwts: total=288973,292674,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:39.506 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:39.506
00:13:39.506 Run status group 0 (all jobs):
00:13:39.506 READ: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=1129MiB (1184MB), run=10001-10001msec
00:13:39.506 WRITE: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=1143MiB (1199MB), run=10001-10001msec
00:13:39.506 -----------------------------------------------------
00:13:39.506 Suppressions used:
00:13:39.506 count bytes template
00:13:39.506 6 48 /usr/src/fio/parse.c
00:13:39.506 3514 337344 /usr/src/fio/iolog.c
00:13:39.506 1 8 libtcmalloc_minimal.so
00:13:39.506 1 904 libcrypto.so
00:13:39.506
----------------------------------------------------- 00:13:39.506 00:13:39.506 00:13:39.506 real 0m12.056s 00:13:39.506 user 0m38.497s 00:13:39.506 sys 0m16.119s 00:13:39.506 21:00:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:39.506 21:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:39.506 ************************************ 00:13:39.506 END TEST bdev_fio_rw_verify 00:13:39.506 ************************************ 00:13:39.506 21:00:00 -- bdev/blockdev.sh@348 -- # rm -f 00:13:39.506 21:00:00 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.506 21:00:00 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:39.506 21:00:00 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.506 21:00:00 -- common/autotest_common.sh@1270 -- # local workload=trim 00:13:39.506 21:00:00 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:13:39.506 21:00:00 -- common/autotest_common.sh@1272 -- # local env_context= 00:13:39.506 21:00:00 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:13:39.506 21:00:00 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:39.506 21:00:00 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:13:39.506 21:00:00 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:13:39.506 21:00:00 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.506 21:00:00 -- common/autotest_common.sh@1290 -- # cat 00:13:39.506 21:00:00 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:13:39.506 21:00:00 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:13:39.506 21:00:00 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:13:39.506 21:00:00 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:39.506 21:00:00 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1ad05498-2ecc-4742-a41b-61798788900b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1ad05498-2ecc-4742-a41b-61798788900b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "82542689-8280-4bbf-9a9c-9408a1045fa1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "82542689-8280-4bbf-9a9c-9408a1045fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "a8997a4e-575c-4608-8560-5dffd205fbde"' ' ],' ' "product_name": 
"xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8997a4e-575c-4608-8560-5dffd205fbde",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "beb259f4-d630-4823-a64c-d763309a5ac2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "beb259f4-d630-4823-a64c-d763309a5ac2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a0d41d4c-6092-4826-9386-0787692d9f2c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a0d41d4c-6092-4826-9386-0787692d9f2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5323c12f-b5b4-4a01-af72-8f3c710dd116"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5323c12f-b5b4-4a01-af72-8f3c710dd116",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:39.506 21:00:00 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:13:39.506 21:00:00 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.506 21:00:00 -- bdev/blockdev.sh@360 -- # popd 00:13:39.506 /home/vagrant/spdk_repo/spdk 00:13:39.506 21:00:00 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:13:39.506 21:00:00 -- bdev/blockdev.sh@362 -- # return 0 00:13:39.506 00:13:39.506 real 0m12.248s 00:13:39.506 user 0m38.603s 00:13:39.506 sys 0m16.205s 00:13:39.506 21:00:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:39.506 21:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:39.506 ************************************ 00:13:39.506 END TEST bdev_fio 00:13:39.506 ************************************ 00:13:39.506 21:00:00 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:39.506 21:00:00 -- 
bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.506 21:00:00 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:39.506 21:00:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.506 21:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:39.506 ************************************ 00:13:39.506 START TEST bdev_verify 00:13:39.506 ************************************ 00:13:39.506 21:00:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.506 [2024-12-08 21:00:00.311228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:39.506 [2024-12-08 21:00:00.312177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69010 ] 00:13:39.506 [2024-12-08 21:00:00.481071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:39.766 [2024-12-08 21:00:00.635000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.766 [2024-12-08 21:00:00.635028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.025 Running I/O for 5 seconds... 00:13:45.355 00:13:45.355 Latency(us) 00:13:45.355 [2024-12-08T21:00:06.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0x0 length 0x20000 00:13:45.355 nvme0n1 : 5.07 2618.90 10.23 0.00 0.00 48631.32 15728.64 65297.69 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0x20000 length 0x20000 00:13:45.355 nvme0n1 : 5.09 2734.76 10.68 0.00 0.00 46595.43 5689.72 66727.56 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0x0 length 0x80000 00:13:45.355 nvme1n1 : 5.06 2563.08 10.01 0.00 0.00 49718.82 18111.77 75783.45 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0x80000 length 0x80000 00:13:45.355 nvme1n1 : 5.09 2797.63 10.93 0.00 0.00 45524.78 5123.72 58148.31 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0x0 length 0x80000 00:13:45.355 nvme1n2 : 5.07 2659.04 10.39 0.00 0.00 47829.41 4259.84 72923.69 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0x80000 length 0x80000 00:13:45.355 nvme1n2 : 5.09 2746.78 10.73 0.00 0.00 46322.04 5481.19 72447.07 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0x0 length 0x80000 00:13:45.355 nvme1n3 : 5.08 2575.06 10.06 0.00 0.00 49386.33 3440.64 70540.57 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme1n3 (Core Mask 0x2, workload: 
verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0x80000 length 0x80000 00:13:45.355 nvme1n3 : 5.09 2629.88 10.27 0.00 0.00 48311.72 3649.16 63867.81 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0x0 length 0xbd0bd 00:13:45.355 nvme2n1 : 5.08 2846.32 11.12 0.00 0.00 44655.42 6851.49 65774.31 00:13:45.355 [2024-12-08T21:00:06.398Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.355 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:45.356 nvme2n1 : 5.09 3145.83 12.29 0.00 0.00 40310.88 4706.68 64821.06 00:13:45.356 [2024-12-08T21:00:06.399Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.356 Verification LBA range: start 0x0 length 0xa0000 00:13:45.356 nvme3n1 : 5.08 2573.09 10.05 0.00 0.00 49256.70 3336.38 71017.19 00:13:45.356 [2024-12-08T21:00:06.399Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.356 Verification LBA range: start 0xa0000 length 0xa0000 00:13:45.356 nvme3n1 : 5.09 2674.89 10.45 0.00 0.00 47349.67 3991.74 66250.94 00:13:45.356 [2024-12-08T21:00:06.399Z] =================================================================================================================== 00:13:45.356 [2024-12-08T21:00:06.399Z] Total : 32565.24 127.21 0.00 0.00 46843.07 3336.38 75783.45 00:13:46.294 00:13:46.294 real 0m6.858s 00:13:46.294 user 0m8.587s 00:13:46.294 sys 0m3.477s 00:13:46.294 21:00:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:46.294 21:00:07 -- common/autotest_common.sh@10 -- # set +x 00:13:46.294 ************************************ 00:13:46.294 END TEST bdev_verify 00:13:46.294 ************************************ 00:13:46.294 21:00:07 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:46.294 21:00:07 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:46.294 21:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.294 21:00:07 -- common/autotest_common.sh@10 -- # set +x 00:13:46.294 ************************************ 00:13:46.294 START TEST bdev_verify_big_io 00:13:46.294 ************************************ 00:13:46.294 21:00:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:46.294 [2024-12-08 21:00:07.195007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:46.294 [2024-12-08 21:00:07.195171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69112 ] 00:13:46.553 [2024-12-08 21:00:07.351270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.553 [2024-12-08 21:00:07.500767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.553 [2024-12-08 21:00:07.500781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.121 Running I/O for 5 seconds... 
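This big-I/O pass and the bdev_verify pass above drive the same bdevperf example binary against the generated bdev.json; only the I/O size changes (-o 65536 here versus -o 4096 above). A sketch of the invocation with the standard option meanings annotated; the -C flag is carried over from the logged command without interpretation:

  # verify workload across every bdev in the config, two reactors (mask 0x3)
  ./build/examples/bdevperf \
      --json test/bdev/bdev.json \   # bdev configuration to load
      -q 128 \                       # queue depth per job
      -o 65536 \                     # I/O size in bytes
      -w verify \                    # write, read back, and compare payloads
      -t 5 \                         # run time in seconds
      -C \                           # as in the logged command
      -m 0x3                         # core mask: reactors on cores 0 and 1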
00:13:53.686 00:13:53.686 Latency(us) 00:13:53.686 [2024-12-08T21:00:14.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x0 length 0x2000 00:13:53.686 nvme0n1 : 5.62 295.15 18.45 0.00 0.00 421060.02 46709.29 583389.56 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x2000 length 0x2000 00:13:53.686 nvme0n1 : 5.59 280.25 17.52 0.00 0.00 440905.84 49092.42 632958.60 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x0 length 0x8000 00:13:53.686 nvme1n1 : 5.60 279.61 17.48 0.00 0.00 436359.58 45994.36 526194.50 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x8000 length 0x8000 00:13:53.686 nvme1n1 : 5.54 266.82 16.68 0.00 0.00 457059.71 51713.86 571950.55 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x0 length 0x8000 00:13:53.686 nvme1n2 : 5.62 310.83 19.43 0.00 0.00 392358.06 13643.40 463279.94 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x8000 length 0x8000 00:13:53.686 nvme1n2 : 5.59 296.57 18.54 0.00 0.00 408195.73 50522.30 503316.48 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x0 length 0x8000 00:13:53.686 nvme1n3 : 5.63 310.67 19.42 0.00 0.00 384813.69 11856.06 402271.88 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x8000 length 0x8000 00:13:53.686 nvme1n3 : 5.60 296.46 18.53 0.00 0.00 399362.36 49330.73 440401.92 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x0 length 0xbd0b 00:13:53.686 nvme2n1 : 5.63 246.89 15.43 0.00 0.00 474312.83 5659.93 541446.52 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:53.686 nvme2n1 : 5.61 295.50 18.47 0.00 0.00 393863.19 7626.01 448027.93 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0x0 length 0xa000 00:13:53.686 nvme3n1 : 5.63 294.65 18.42 0.00 0.00 391467.60 10485.76 335544.32 00:13:53.686 [2024-12-08T21:00:14.729Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:53.686 Verification LBA range: start 0xa000 length 0xa000 00:13:53.687 nvme3n1 : 5.62 278.87 17.43 0.00 0.00 407980.33 9175.04 436588.92 00:13:53.687 [2024-12-08T21:00:14.730Z] =================================================================================================================== 00:13:53.687 [2024-12-08T21:00:14.730Z] Total : 3452.28 215.77 0.00 0.00 415728.57 5659.93 632958.60 00:13:53.946 00:13:53.946 real 0m7.701s 00:13:53.946 user 
0m13.603s 00:13:53.946 sys 0m0.694s 00:13:53.946 21:00:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:53.946 21:00:14 -- common/autotest_common.sh@10 -- # set +x 00:13:53.946 ************************************ 00:13:53.946 END TEST bdev_verify_big_io 00:13:53.946 ************************************ 00:13:53.946 21:00:14 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.946 21:00:14 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:53.946 21:00:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.946 21:00:14 -- common/autotest_common.sh@10 -- # set +x 00:13:53.946 ************************************ 00:13:53.946 START TEST bdev_write_zeroes 00:13:53.946 ************************************ 00:13:53.946 21:00:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.946 [2024-12-08 21:00:14.971625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:53.946 [2024-12-08 21:00:14.971810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69222 ] 00:13:54.205 [2024-12-08 21:00:15.144004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.463 [2024-12-08 21:00:15.334166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.720 Running I/O for 1 seconds... 00:13:56.093 00:13:56.093 Latency(us) 00:13:56.093 [2024-12-08T21:00:17.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.093 [2024-12-08T21:00:17.136Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.093 nvme0n1 : 1.02 11895.53 46.47 0.00 0.00 10749.94 6851.49 21567.30 00:13:56.093 [2024-12-08T21:00:17.136Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.093 nvme1n1 : 1.02 11878.09 46.40 0.00 0.00 10758.85 7060.01 22520.55 00:13:56.093 [2024-12-08T21:00:17.136Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.093 nvme1n2 : 1.03 11860.68 46.33 0.00 0.00 10766.36 7119.59 23354.65 00:13:56.093 [2024-12-08T21:00:17.136Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.093 nvme1n3 : 1.02 11972.24 46.77 0.00 0.00 10658.53 6166.34 22520.55 00:13:56.093 [2024-12-08T21:00:17.136Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.093 nvme2n1 : 1.02 18594.64 72.64 0.00 0.00 6855.02 3455.53 12809.31 00:13:56.093 [2024-12-08T21:00:17.136Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.093 nvme3n1 : 1.03 11842.78 46.26 0.00 0.00 10723.36 4408.79 23712.12 00:13:56.093 [2024-12-08T21:00:17.136Z] =================================================================================================================== 00:13:56.093 [2024-12-08T21:00:17.136Z] Total : 78043.96 304.86 0.00 0.00 9809.35 3455.53 23712.12 00:13:56.661 00:13:56.661 real 0m2.733s 00:13:56.661 user 0m1.945s 00:13:56.661 sys 0m0.592s 00:13:56.661 21:00:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:13:56.661 21:00:17 -- common/autotest_common.sh@10 -- # set +x 00:13:56.661 ************************************ 00:13:56.661 END TEST bdev_write_zeroes 00:13:56.661 ************************************ 00:13:56.661 21:00:17 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:56.661 21:00:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:56.661 21:00:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:56.661 21:00:17 -- common/autotest_common.sh@10 -- # set +x 00:13:56.661 ************************************ 00:13:56.661 START TEST bdev_json_nonenclosed 00:13:56.661 ************************************ 00:13:56.661 21:00:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:56.920 [2024-12-08 21:00:17.752999] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:56.920 [2024-12-08 21:00:17.753197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69270 ] 00:13:56.920 [2024-12-08 21:00:17.923541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.180 [2024-12-08 21:00:18.071069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.180 [2024-12-08 21:00:18.071290] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:57.180 [2024-12-08 21:00:18.071317] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:57.440 00:13:57.440 real 0m0.719s 00:13:57.440 user 0m0.484s 00:13:57.440 sys 0m0.130s 00:13:57.440 21:00:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:57.440 21:00:18 -- common/autotest_common.sh@10 -- # set +x 00:13:57.440 ************************************ 00:13:57.440 END TEST bdev_json_nonenclosed 00:13:57.440 ************************************ 00:13:57.440 21:00:18 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.440 21:00:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:57.440 21:00:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.440 21:00:18 -- common/autotest_common.sh@10 -- # set +x 00:13:57.440 ************************************ 00:13:57.440 START TEST bdev_json_nonarray 00:13:57.440 ************************************ 00:13:57.440 21:00:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.700 [2024-12-08 21:00:18.532878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
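The bdev_json_nonenclosed test above and the bdev_json_nonarray test below are negative tests: bdevperf is handed a deliberately malformed configuration and must reject it cleanly (the json_config.c *ERROR* plus the spdk_app_stop'd on non-zero warning) rather than crash. A rough reproduction of the two failure shapes, using hand-written stand-ins for the fixture files shipped in test/bdev:

  echo '[]' > nonenclosed.json                 # parses as JSON, but the top level is not an object
  echo '{ "subsystems": {} }' > nonarray.json  # an object, but "subsystems" is not an array
  # feeding either file to bdevperf --json ... should trip the matching
  # json_config.c error seen in this log and exit non-zero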
00:13:57.700 [2024-12-08 21:00:18.533050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69295 ] 00:13:57.700 [2024-12-08 21:00:18.707139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.960 [2024-12-08 21:00:18.927502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.960 [2024-12-08 21:00:18.927691] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:57.960 [2024-12-08 21:00:18.927718] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:58.219 00:13:58.219 real 0m0.801s 00:13:58.219 user 0m0.563s 00:13:58.219 sys 0m0.132s 00:13:58.219 21:00:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:58.219 21:00:19 -- common/autotest_common.sh@10 -- # set +x 00:13:58.219 ************************************ 00:13:58.219 END TEST bdev_json_nonarray 00:13:58.219 ************************************ 00:13:58.478 21:00:19 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:58.478 21:00:19 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:58.478 21:00:19 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:58.478 21:00:19 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:58.478 21:00:19 -- bdev/blockdev.sh@809 -- # cleanup 00:13:58.478 21:00:19 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:58.478 21:00:19 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:58.478 21:00:19 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:58.478 21:00:19 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:58.478 21:00:19 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:58.478 21:00:19 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:58.478 21:00:19 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:59.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:59.415 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.415 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.415 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.676 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.676 00:13:59.676 real 0m57.582s 00:13:59.676 user 1m37.662s 00:13:59.676 sys 0m27.783s 00:13:59.676 21:00:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:59.676 21:00:20 -- common/autotest_common.sh@10 -- # set +x 00:13:59.676 ************************************ 00:13:59.676 END TEST blockdev_xnvme 00:13:59.676 ************************************ 00:13:59.676 21:00:20 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:59.676 21:00:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:59.676 21:00:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.676 21:00:20 -- common/autotest_common.sh@10 -- # set +x 00:13:59.676 ************************************ 00:13:59.676 START TEST ublk 00:13:59.676 ************************************ 00:13:59.676 21:00:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:59.676 * Looking for test storage... 
00:13:59.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:59.676 21:00:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:59.676 21:00:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:59.676 21:00:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:59.935 21:00:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:59.936 21:00:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:59.936 21:00:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:59.936 21:00:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:59.936 21:00:20 -- scripts/common.sh@335 -- # IFS=.-: 00:13:59.936 21:00:20 -- scripts/common.sh@335 -- # read -ra ver1 00:13:59.936 21:00:20 -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.936 21:00:20 -- scripts/common.sh@336 -- # read -ra ver2 00:13:59.936 21:00:20 -- scripts/common.sh@337 -- # local 'op=<' 00:13:59.936 21:00:20 -- scripts/common.sh@339 -- # ver1_l=2 00:13:59.936 21:00:20 -- scripts/common.sh@340 -- # ver2_l=1 00:13:59.936 21:00:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:59.936 21:00:20 -- scripts/common.sh@343 -- # case "$op" in 00:13:59.936 21:00:20 -- scripts/common.sh@344 -- # : 1 00:13:59.936 21:00:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:59.936 21:00:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:59.936 21:00:20 -- scripts/common.sh@364 -- # decimal 1 00:13:59.936 21:00:20 -- scripts/common.sh@352 -- # local d=1 00:13:59.936 21:00:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.936 21:00:20 -- scripts/common.sh@354 -- # echo 1 00:13:59.936 21:00:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:59.936 21:00:20 -- scripts/common.sh@365 -- # decimal 2 00:13:59.936 21:00:20 -- scripts/common.sh@352 -- # local d=2 00:13:59.936 21:00:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.936 21:00:20 -- scripts/common.sh@354 -- # echo 2 00:13:59.936 21:00:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:59.936 21:00:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:59.936 21:00:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:59.936 21:00:20 -- scripts/common.sh@367 -- # return 0 00:13:59.936 21:00:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.936 21:00:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:59.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.936 --rc genhtml_branch_coverage=1 00:13:59.936 --rc genhtml_function_coverage=1 00:13:59.936 --rc genhtml_legend=1 00:13:59.936 --rc geninfo_all_blocks=1 00:13:59.936 --rc geninfo_unexecuted_blocks=1 00:13:59.936 00:13:59.936 ' 00:13:59.936 21:00:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:59.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.936 --rc genhtml_branch_coverage=1 00:13:59.936 --rc genhtml_function_coverage=1 00:13:59.936 --rc genhtml_legend=1 00:13:59.936 --rc geninfo_all_blocks=1 00:13:59.936 --rc geninfo_unexecuted_blocks=1 00:13:59.936 00:13:59.936 ' 00:13:59.936 21:00:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:59.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.936 --rc genhtml_branch_coverage=1 00:13:59.936 --rc genhtml_function_coverage=1 00:13:59.936 --rc genhtml_legend=1 00:13:59.936 --rc geninfo_all_blocks=1 00:13:59.936 --rc geninfo_unexecuted_blocks=1 00:13:59.936 00:13:59.936 ' 00:13:59.936 21:00:20 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:59.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.936 --rc genhtml_branch_coverage=1 00:13:59.936 --rc genhtml_function_coverage=1 00:13:59.936 --rc genhtml_legend=1 00:13:59.936 --rc geninfo_all_blocks=1 00:13:59.936 --rc geninfo_unexecuted_blocks=1 00:13:59.936 00:13:59.936 ' 00:13:59.936 21:00:20 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:59.936 21:00:20 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:59.936 21:00:20 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:59.936 21:00:20 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:59.936 21:00:20 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:59.936 21:00:20 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:59.936 21:00:20 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:59.936 21:00:20 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:59.936 21:00:20 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:59.936 21:00:20 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:59.936 21:00:20 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:59.936 21:00:20 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:59.936 21:00:20 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:59.936 21:00:20 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:59.936 21:00:20 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:59.936 21:00:20 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:59.936 21:00:20 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:59.936 21:00:20 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:59.936 21:00:20 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:59.936 21:00:20 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:59.936 21:00:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:59.936 21:00:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.936 21:00:20 -- common/autotest_common.sh@10 -- # set +x 00:13:59.936 ************************************ 00:13:59.936 START TEST test_save_ublk_config 00:13:59.936 ************************************ 00:13:59.936 21:00:20 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:59.936 21:00:20 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:59.936 21:00:20 -- ublk/ublk.sh@103 -- # tgtpid=69590 00:13:59.936 21:00:20 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:59.936 21:00:20 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:59.936 21:00:20 -- ublk/ublk.sh@106 -- # waitforlisten 69590 00:13:59.936 21:00:20 -- common/autotest_common.sh@829 -- # '[' -z 69590 ']' 00:13:59.936 21:00:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.936 21:00:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.936 21:00:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.936 21:00:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.936 21:00:20 -- common/autotest_common.sh@10 -- # set +x 00:13:59.936 [2024-12-08 21:00:20.959465] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
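test_save_ublk_config, which starts here, is a round-trip test: bring up spdk_tgt with ublk tracing enabled, create a ublk target plus one disk backed by a malloc bdev, dump the runtime state with save_config, then restart the target from that dump and verify the disk reappears. A condensed sketch of the RPC sequence recorded below (sizes chosen to match the saved config: an 8192-block, 4096-byte malloc bdev behind one queue of depth 128):

  ./build/bin/spdk_tgt -L ublk &
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 32 MiB of 4 KiB blocks
  ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0
  ./scripts/rpc.py save_config > ublk_config.json
  # later: stop the target, then restart it from the dump
  ./build/bin/spdk_tgt -L ublk -c ublk_config.json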
00:13:59.936 [2024-12-08 21:00:20.959639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69590 ] 00:14:00.196 [2024-12-08 21:00:21.131991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.455 [2024-12-08 21:00:21.372044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:00.455 [2024-12-08 21:00:21.372345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.834 21:00:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.834 21:00:22 -- common/autotest_common.sh@862 -- # return 0 00:14:01.834 21:00:22 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:01.834 21:00:22 -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:01.834 21:00:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.834 21:00:22 -- common/autotest_common.sh@10 -- # set +x 00:14:01.834 [2024-12-08 21:00:22.590849] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:01.834 malloc0 00:14:01.834 [2024-12-08 21:00:22.653231] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:01.834 [2024-12-08 21:00:22.653333] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:01.834 [2024-12-08 21:00:22.653347] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:01.834 [2024-12-08 21:00:22.653357] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:01.834 [2024-12-08 21:00:22.658981] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:01.834 [2024-12-08 21:00:22.659016] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:01.834 [2024-12-08 21:00:22.668143] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:01.834 [2024-12-08 21:00:22.668252] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:01.834 [2024-12-08 21:00:22.692234] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:01.834 0 00:14:01.834 21:00:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.834 21:00:22 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:01.834 21:00:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.834 21:00:22 -- common/autotest_common.sh@10 -- # set +x 00:14:02.094 21:00:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.094 21:00:22 -- ublk/ublk.sh@115 -- # config='{ 00:14:02.094 "subsystems": [ 00:14:02.094 { 00:14:02.094 "subsystem": "iobuf", 00:14:02.094 "config": [ 00:14:02.094 { 00:14:02.094 "method": "iobuf_set_options", 00:14:02.094 "params": { 00:14:02.094 "small_pool_count": 8192, 00:14:02.094 "large_pool_count": 1024, 00:14:02.094 "small_bufsize": 8192, 00:14:02.094 "large_bufsize": 135168 00:14:02.094 } 00:14:02.094 } 00:14:02.094 ] 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "subsystem": "sock", 00:14:02.094 "config": [ 00:14:02.094 { 00:14:02.094 "method": "sock_impl_set_options", 00:14:02.094 "params": { 00:14:02.094 "impl_name": "posix", 00:14:02.094 "recv_buf_size": 2097152, 00:14:02.094 "send_buf_size": 2097152, 00:14:02.094 "enable_recv_pipe": true, 00:14:02.094 "enable_quickack": false, 00:14:02.094 "enable_placement_id": 0, 00:14:02.094 
"enable_zerocopy_send_server": true, 00:14:02.094 "enable_zerocopy_send_client": false, 00:14:02.094 "zerocopy_threshold": 0, 00:14:02.094 "tls_version": 0, 00:14:02.094 "enable_ktls": false 00:14:02.094 } 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "method": "sock_impl_set_options", 00:14:02.094 "params": { 00:14:02.094 "impl_name": "ssl", 00:14:02.094 "recv_buf_size": 4096, 00:14:02.094 "send_buf_size": 4096, 00:14:02.094 "enable_recv_pipe": true, 00:14:02.094 "enable_quickack": false, 00:14:02.094 "enable_placement_id": 0, 00:14:02.094 "enable_zerocopy_send_server": true, 00:14:02.094 "enable_zerocopy_send_client": false, 00:14:02.094 "zerocopy_threshold": 0, 00:14:02.094 "tls_version": 0, 00:14:02.094 "enable_ktls": false 00:14:02.094 } 00:14:02.094 } 00:14:02.094 ] 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "subsystem": "vmd", 00:14:02.094 "config": [] 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "subsystem": "accel", 00:14:02.094 "config": [ 00:14:02.094 { 00:14:02.094 "method": "accel_set_options", 00:14:02.094 "params": { 00:14:02.094 "small_cache_size": 128, 00:14:02.094 "large_cache_size": 16, 00:14:02.094 "task_count": 2048, 00:14:02.094 "sequence_count": 2048, 00:14:02.094 "buf_count": 2048 00:14:02.094 } 00:14:02.094 } 00:14:02.094 ] 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "subsystem": "bdev", 00:14:02.094 "config": [ 00:14:02.094 { 00:14:02.094 "method": "bdev_set_options", 00:14:02.094 "params": { 00:14:02.094 "bdev_io_pool_size": 65535, 00:14:02.094 "bdev_io_cache_size": 256, 00:14:02.094 "bdev_auto_examine": true, 00:14:02.094 "iobuf_small_cache_size": 128, 00:14:02.094 "iobuf_large_cache_size": 16 00:14:02.094 } 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "method": "bdev_raid_set_options", 00:14:02.094 "params": { 00:14:02.094 "process_window_size_kb": 1024 00:14:02.094 } 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "method": "bdev_iscsi_set_options", 00:14:02.094 "params": { 00:14:02.094 "timeout_sec": 30 00:14:02.094 } 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "method": "bdev_nvme_set_options", 00:14:02.094 "params": { 00:14:02.094 "action_on_timeout": "none", 00:14:02.094 "timeout_us": 0, 00:14:02.094 "timeout_admin_us": 0, 00:14:02.094 "keep_alive_timeout_ms": 10000, 00:14:02.094 "transport_retry_count": 4, 00:14:02.094 "arbitration_burst": 0, 00:14:02.094 "low_priority_weight": 0, 00:14:02.094 "medium_priority_weight": 0, 00:14:02.094 "high_priority_weight": 0, 00:14:02.094 "nvme_adminq_poll_period_us": 10000, 00:14:02.094 "nvme_ioq_poll_period_us": 0, 00:14:02.094 "io_queue_requests": 0, 00:14:02.094 "delay_cmd_submit": true, 00:14:02.094 "bdev_retry_count": 3, 00:14:02.094 "transport_ack_timeout": 0, 00:14:02.094 "ctrlr_loss_timeout_sec": 0, 00:14:02.094 "reconnect_delay_sec": 0, 00:14:02.094 "fast_io_fail_timeout_sec": 0, 00:14:02.094 "generate_uuids": false, 00:14:02.094 "transport_tos": 0, 00:14:02.094 "io_path_stat": false, 00:14:02.094 "allow_accel_sequence": false 00:14:02.094 } 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "method": "bdev_nvme_set_hotplug", 00:14:02.094 "params": { 00:14:02.094 "period_us": 100000, 00:14:02.094 "enable": false 00:14:02.094 } 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "method": "bdev_malloc_create", 00:14:02.094 "params": { 00:14:02.094 "name": "malloc0", 00:14:02.094 "num_blocks": 8192, 00:14:02.094 "block_size": 4096, 00:14:02.094 "physical_block_size": 4096, 00:14:02.094 "uuid": "a34b52d6-b0a8-4d53-a19c-1bba99cc2390", 00:14:02.094 "optimal_io_boundary": 0 00:14:02.094 } 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 
"method": "bdev_wait_for_examine" 00:14:02.094 } 00:14:02.094 ] 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "subsystem": "scsi", 00:14:02.094 "config": null 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "subsystem": "scheduler", 00:14:02.094 "config": [ 00:14:02.094 { 00:14:02.094 "method": "framework_set_scheduler", 00:14:02.094 "params": { 00:14:02.094 "name": "static" 00:14:02.094 } 00:14:02.094 } 00:14:02.094 ] 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "subsystem": "vhost_scsi", 00:14:02.094 "config": [] 00:14:02.094 }, 00:14:02.095 { 00:14:02.095 "subsystem": "vhost_blk", 00:14:02.095 "config": [] 00:14:02.095 }, 00:14:02.095 { 00:14:02.095 "subsystem": "ublk", 00:14:02.095 "config": [ 00:14:02.095 { 00:14:02.095 "method": "ublk_create_target", 00:14:02.095 "params": { 00:14:02.095 "cpumask": "1" 00:14:02.095 } 00:14:02.095 }, 00:14:02.095 { 00:14:02.095 "method": "ublk_start_disk", 00:14:02.095 "params": { 00:14:02.095 "bdev_name": "malloc0", 00:14:02.095 "ublk_id": 0, 00:14:02.095 "num_queues": 1, 00:14:02.095 "queue_depth": 128 00:14:02.095 } 00:14:02.095 } 00:14:02.095 ] 00:14:02.095 }, 00:14:02.095 { 00:14:02.095 "subsystem": "nbd", 00:14:02.095 "config": [] 00:14:02.095 }, 00:14:02.095 { 00:14:02.095 "subsystem": "nvmf", 00:14:02.095 "config": [ 00:14:02.095 { 00:14:02.095 "method": "nvmf_set_config", 00:14:02.095 "params": { 00:14:02.095 "discovery_filter": "match_any", 00:14:02.095 "admin_cmd_passthru": { 00:14:02.095 "identify_ctrlr": false 00:14:02.095 } 00:14:02.095 } 00:14:02.095 }, 00:14:02.095 { 00:14:02.095 "method": "nvmf_set_max_subsystems", 00:14:02.095 "params": { 00:14:02.095 "max_subsystems": 1024 00:14:02.095 } 00:14:02.095 }, 00:14:02.095 { 00:14:02.095 "method": "nvmf_set_crdt", 00:14:02.095 "params": { 00:14:02.095 "crdt1": 0, 00:14:02.095 "crdt2": 0, 00:14:02.095 "crdt3": 0 00:14:02.095 } 00:14:02.095 } 00:14:02.095 ] 00:14:02.095 }, 00:14:02.095 { 00:14:02.095 "subsystem": "iscsi", 00:14:02.095 "config": [ 00:14:02.095 { 00:14:02.095 "method": "iscsi_set_options", 00:14:02.095 "params": { 00:14:02.095 "node_base": "iqn.2016-06.io.spdk", 00:14:02.095 "max_sessions": 128, 00:14:02.095 "max_connections_per_session": 2, 00:14:02.095 "max_queue_depth": 64, 00:14:02.095 "default_time2wait": 2, 00:14:02.095 "default_time2retain": 20, 00:14:02.095 "first_burst_length": 8192, 00:14:02.095 "immediate_data": true, 00:14:02.095 "allow_duplicated_isid": false, 00:14:02.095 "error_recovery_level": 0, 00:14:02.095 "nop_timeout": 60, 00:14:02.095 "nop_in_interval": 30, 00:14:02.095 "disable_chap": false, 00:14:02.095 "require_chap": false, 00:14:02.095 "mutual_chap": false, 00:14:02.095 "chap_group": 0, 00:14:02.095 "max_large_datain_per_connection": 64, 00:14:02.095 "max_r2t_per_connection": 4, 00:14:02.095 "pdu_pool_size": 36864, 00:14:02.095 "immediate_data_pool_size": 16384, 00:14:02.095 "data_out_pool_size": 2048 00:14:02.095 } 00:14:02.095 } 00:14:02.095 ] 00:14:02.095 } 00:14:02.095 ] 00:14:02.095 }' 00:14:02.095 21:00:22 -- ublk/ublk.sh@116 -- # killprocess 69590 00:14:02.095 21:00:22 -- common/autotest_common.sh@936 -- # '[' -z 69590 ']' 00:14:02.095 21:00:22 -- common/autotest_common.sh@940 -- # kill -0 69590 00:14:02.095 21:00:22 -- common/autotest_common.sh@941 -- # uname 00:14:02.095 21:00:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:02.095 21:00:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69590 00:14:02.095 21:00:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:02.095 21:00:22 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:02.095 killing process with pid 69590 00:14:02.095 21:00:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69590' 00:14:02.095 21:00:22 -- common/autotest_common.sh@955 -- # kill 69590 00:14:02.095 21:00:22 -- common/autotest_common.sh@960 -- # wait 69590 00:14:03.034 [2024-12-08 21:00:24.051325] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:03.294 [2024-12-08 21:00:24.078310] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:03.294 [2024-12-08 21:00:24.078508] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:03.294 [2024-12-08 21:00:24.086219] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:03.294 [2024-12-08 21:00:24.086295] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:03.294 [2024-12-08 21:00:24.086312] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:03.294 [2024-12-08 21:00:24.086351] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:03.294 [2024-12-08 21:00:24.086546] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:04.670 21:00:25 -- ublk/ublk.sh@119 -- # tgtpid=69652 00:14:04.670 21:00:25 -- ublk/ublk.sh@121 -- # waitforlisten 69652 00:14:04.670 21:00:25 -- common/autotest_common.sh@829 -- # '[' -z 69652 ']' 00:14:04.670 21:00:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.670 21:00:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.670 21:00:25 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:04.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.670 21:00:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
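Note the -c /dev/fd/63 in the spdk_tgt command line above: rather than writing the saved configuration to a temporary file, the harness feeds it to the new target through bash process substitution, which the shell exposes as a /dev/fd path. An equivalent sketch:

  config=$(./scripts/rpc.py save_config)
  ./build/bin/spdk_tgt -L ublk -c <(echo "$config")   # the pipe appears as /dev/fd/63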
00:14:04.670 21:00:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.670 21:00:25 -- ublk/ublk.sh@118 -- # echo '{ 00:14:04.670 "subsystems": [ 00:14:04.670 { 00:14:04.670 "subsystem": "iobuf", 00:14:04.670 "config": [ 00:14:04.670 { 00:14:04.670 "method": "iobuf_set_options", 00:14:04.670 "params": { 00:14:04.670 "small_pool_count": 8192, 00:14:04.670 "large_pool_count": 1024, 00:14:04.670 "small_bufsize": 8192, 00:14:04.670 "large_bufsize": 135168 00:14:04.670 } 00:14:04.670 } 00:14:04.670 ] 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "subsystem": "sock", 00:14:04.670 "config": [ 00:14:04.670 { 00:14:04.670 "method": "sock_impl_set_options", 00:14:04.670 "params": { 00:14:04.670 "impl_name": "posix", 00:14:04.670 "recv_buf_size": 2097152, 00:14:04.670 "send_buf_size": 2097152, 00:14:04.670 "enable_recv_pipe": true, 00:14:04.670 "enable_quickack": false, 00:14:04.670 "enable_placement_id": 0, 00:14:04.670 "enable_zerocopy_send_server": true, 00:14:04.670 "enable_zerocopy_send_client": false, 00:14:04.670 "zerocopy_threshold": 0, 00:14:04.670 "tls_version": 0, 00:14:04.670 "enable_ktls": false 00:14:04.670 } 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "method": "sock_impl_set_options", 00:14:04.670 "params": { 00:14:04.670 "impl_name": "ssl", 00:14:04.670 "recv_buf_size": 4096, 00:14:04.670 "send_buf_size": 4096, 00:14:04.670 "enable_recv_pipe": true, 00:14:04.670 "enable_quickack": false, 00:14:04.670 "enable_placement_id": 0, 00:14:04.670 "enable_zerocopy_send_server": true, 00:14:04.670 "enable_zerocopy_send_client": false, 00:14:04.670 "zerocopy_threshold": 0, 00:14:04.670 "tls_version": 0, 00:14:04.670 "enable_ktls": false 00:14:04.670 } 00:14:04.670 } 00:14:04.670 ] 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "subsystem": "vmd", 00:14:04.670 "config": [] 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "subsystem": "accel", 00:14:04.670 "config": [ 00:14:04.670 { 00:14:04.670 "method": "accel_set_options", 00:14:04.670 "params": { 00:14:04.670 "small_cache_size": 128, 00:14:04.670 "large_cache_size": 16, 00:14:04.670 "task_count": 2048, 00:14:04.670 "sequence_count": 2048, 00:14:04.670 "buf_count": 2048 00:14:04.670 } 00:14:04.670 } 00:14:04.670 ] 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "subsystem": "bdev", 00:14:04.670 "config": [ 00:14:04.670 { 00:14:04.670 "method": "bdev_set_options", 00:14:04.670 "params": { 00:14:04.670 "bdev_io_pool_size": 65535, 00:14:04.670 "bdev_io_cache_size": 256, 00:14:04.670 "bdev_auto_examine": true, 00:14:04.670 "iobuf_small_cache_size": 128, 00:14:04.670 "iobuf_large_cache_size": 16 00:14:04.670 } 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "method": "bdev_raid_set_options", 00:14:04.670 "params": { 00:14:04.670 "process_window_size_kb": 1024 00:14:04.670 } 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "method": "bdev_iscsi_set_options", 00:14:04.670 "params": { 00:14:04.670 "timeout_sec": 30 00:14:04.670 } 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "method": "bdev_nvme_set_options", 00:14:04.670 "params": { 00:14:04.670 "action_on_timeout": "none", 00:14:04.670 "timeout_us": 0, 00:14:04.670 "timeout_admin_us": 0, 00:14:04.670 "keep_alive_timeout_ms": 10000, 00:14:04.670 "transport_retry_count": 4, 00:14:04.670 "arbitration_burst": 0, 00:14:04.670 "low_priority_weight": 0, 00:14:04.670 "medium_priority_weight": 0, 00:14:04.670 "high_priority_weight": 0, 00:14:04.670 "nvme_adminq_poll_period_us": 10000, 00:14:04.670 "nvme_ioq_poll_period_us": 0, 00:14:04.670 "io_queue_requests": 0, 00:14:04.670 "delay_cmd_submit": true, 00:14:04.670 
"bdev_retry_count": 3, 00:14:04.670 "transport_ack_timeout": 0, 00:14:04.670 "ctrlr_loss_timeout_sec": 0, 00:14:04.670 "reconnect_delay_sec": 0, 00:14:04.670 "fast_io_fail_timeout_sec": 0, 00:14:04.670 "generate_uuids": false, 00:14:04.670 "transport_tos": 0, 00:14:04.670 "io_path_stat": false, 00:14:04.670 "allow_accel_sequence": false 00:14:04.670 } 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "method": "bdev_nvme_set_hotplug", 00:14:04.670 "params": { 00:14:04.670 "period_us": 100000, 00:14:04.670 "enable": false 00:14:04.670 } 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "method": "bdev_malloc_create", 00:14:04.670 "params": { 00:14:04.670 "name": "malloc0", 00:14:04.670 "num_blocks": 8192, 00:14:04.670 "block_size": 4096, 00:14:04.670 "physical_block_size": 4096, 00:14:04.670 "uuid": "a34b52d6-b0a8-4d53-a19c-1bba99cc2390", 00:14:04.670 "optimal_io_boundary": 0 00:14:04.670 } 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "method": "bdev_wait_for_examine" 00:14:04.670 } 00:14:04.670 ] 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "subsystem": "scsi", 00:14:04.670 "config": null 00:14:04.670 }, 00:14:04.670 { 00:14:04.670 "subsystem": "scheduler", 00:14:04.670 "config": [ 00:14:04.670 { 00:14:04.671 "method": "framework_set_scheduler", 00:14:04.671 "params": { 00:14:04.671 "name": "static" 00:14:04.671 } 00:14:04.671 } 00:14:04.671 ] 00:14:04.671 }, 00:14:04.671 { 00:14:04.671 "subsystem": "vhost_scsi", 00:14:04.671 "config": [] 00:14:04.671 }, 00:14:04.671 { 00:14:04.671 "subsystem": "vhost_blk", 00:14:04.671 "config": [] 00:14:04.671 }, 00:14:04.671 { 00:14:04.671 "subsystem": "ublk", 00:14:04.671 "config": [ 00:14:04.671 { 00:14:04.671 "method": "ublk_create_target", 00:14:04.671 "params": { 00:14:04.671 "cpumask": "1" 00:14:04.671 } 00:14:04.671 }, 00:14:04.671 { 00:14:04.671 "method": "ublk_start_disk", 00:14:04.671 "params": { 00:14:04.671 "bdev_name": "malloc0", 00:14:04.671 "ublk_id": 0, 00:14:04.671 "num_queues": 1, 00:14:04.671 "queue_depth": 128 00:14:04.671 } 00:14:04.671 } 00:14:04.671 ] 00:14:04.671 }, 00:14:04.671 { 00:14:04.671 "subsystem": "nbd", 00:14:04.671 "config": [] 00:14:04.671 }, 00:14:04.671 { 00:14:04.671 "subsystem": "nvmf", 00:14:04.671 "config": [ 00:14:04.671 { 00:14:04.671 "method": "nvmf_set_config", 00:14:04.671 "params": { 00:14:04.671 "discovery_filter": "match_any", 00:14:04.671 "admin_cmd_passthru": { 00:14:04.671 "identify_ctrlr": false 00:14:04.671 } 00:14:04.671 } 00:14:04.671 }, 00:14:04.671 { 00:14:04.671 "method": "nvmf_set_max_subsystems", 00:14:04.671 "params": { 00:14:04.671 "max_subsystems": 1024 00:14:04.671 } 00:14:04.671 }, 00:14:04.671 { 00:14:04.671 "method": "nvmf_set_crdt", 00:14:04.671 "params": { 00:14:04.671 "crdt1": 0, 00:14:04.671 "crdt2": 0, 00:14:04.671 "crdt3": 0 00:14:04.671 } 00:14:04.671 } 00:14:04.671 ] 00:14:04.671 }, 00:14:04.671 { 00:14:04.671 "subsystem": "iscsi", 00:14:04.671 "config": [ 00:14:04.671 { 00:14:04.671 "method": "iscsi_set_options", 00:14:04.671 "params": { 00:14:04.671 "node_base": "iqn.2016-06.io.spdk", 00:14:04.671 "max_sessions": 128, 00:14:04.671 "max_connections_per_session": 2, 00:14:04.671 "max_queue_depth": 64, 00:14:04.671 "default_time2wait": 2, 00:14:04.671 "default_time2retain": 20, 00:14:04.671 "first_burst_length": 8192, 00:14:04.671 "immediate_data": true, 00:14:04.671 "allow_duplicated_isid": false, 00:14:04.671 "error_recovery_level": 0, 00:14:04.671 "nop_timeout": 60, 00:14:04.671 "nop_in_interval": 30, 00:14:04.671 "disable_chap": false, 00:14:04.671 "require_chap": false, 00:14:04.671 
"mutual_chap": false, 00:14:04.671 "chap_group": 0, 00:14:04.671 "max_large_datain_per_connection": 64, 00:14:04.671 "max_r2t_per_connection": 4, 00:14:04.671 "pdu_pool_size": 36864, 00:14:04.671 "immediate_data_pool_size": 16384, 00:14:04.671 "data_out_pool_size": 2048 00:14:04.671 } 00:14:04.671 } 00:14:04.671 ] 00:14:04.671 } 00:14:04.671 ] 00:14:04.671 }' 00:14:04.671 21:00:25 -- common/autotest_common.sh@10 -- # set +x 00:14:04.671 [2024-12-08 21:00:25.622983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:04.671 [2024-12-08 21:00:25.623177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69652 ] 00:14:04.930 [2024-12-08 21:00:25.789606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.930 [2024-12-08 21:00:25.940450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:04.930 [2024-12-08 21:00:25.940702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.867 [2024-12-08 21:00:26.640860] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:05.867 [2024-12-08 21:00:26.647220] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:05.867 [2024-12-08 21:00:26.647316] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:05.867 [2024-12-08 21:00:26.647329] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:05.867 [2024-12-08 21:00:26.647337] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.867 [2024-12-08 21:00:26.654298] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.867 [2024-12-08 21:00:26.654324] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.867 [2024-12-08 21:00:26.661209] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.867 [2024-12-08 21:00:26.661318] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:05.867 [2024-12-08 21:00:26.678216] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.434 21:00:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.434 21:00:27 -- common/autotest_common.sh@862 -- # return 0 00:14:06.434 21:00:27 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:06.434 21:00:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.434 21:00:27 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:06.434 21:00:27 -- common/autotest_common.sh@10 -- # set +x 00:14:06.434 21:00:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.434 21:00:27 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:06.434 21:00:27 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:06.434 21:00:27 -- ublk/ublk.sh@125 -- # killprocess 69652 00:14:06.434 21:00:27 -- common/autotest_common.sh@936 -- # '[' -z 69652 ']' 00:14:06.434 21:00:27 -- common/autotest_common.sh@940 -- # kill -0 69652 00:14:06.434 21:00:27 -- common/autotest_common.sh@941 -- # uname 00:14:06.435 21:00:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:06.435 21:00:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69652 00:14:06.435 21:00:27 
-- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:06.435 21:00:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:06.435 killing process with pid 69652 00:14:06.435 21:00:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69652' 00:14:06.435 21:00:27 -- common/autotest_common.sh@955 -- # kill 69652 00:14:06.435 21:00:27 -- common/autotest_common.sh@960 -- # wait 69652 00:14:07.371 [2024-12-08 21:00:28.256991] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:07.371 [2024-12-08 21:00:28.288159] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:07.371 [2024-12-08 21:00:28.288275] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:07.371 [2024-12-08 21:00:28.296359] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:07.371 [2024-12-08 21:00:28.296444] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:07.371 [2024-12-08 21:00:28.296454] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:07.371 [2024-12-08 21:00:28.296522] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:07.371 [2024-12-08 21:00:28.296710] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:08.762 21:00:29 -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:08.762 00:14:08.762 real 0m8.872s 00:14:08.762 user 0m7.307s 00:14:08.762 sys 0m2.904s 00:14:08.762 21:00:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:08.762 ************************************ 00:14:08.762 END TEST test_save_ublk_config 00:14:08.762 21:00:29 -- common/autotest_common.sh@10 -- # set +x 00:14:08.762 ************************************ 00:14:08.762 21:00:29 -- ublk/ublk.sh@139 -- # spdk_pid=69732 00:14:08.762 21:00:29 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:08.762 21:00:29 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:08.762 21:00:29 -- ublk/ublk.sh@141 -- # waitforlisten 69732 00:14:08.762 21:00:29 -- common/autotest_common.sh@829 -- # '[' -z 69732 ']' 00:14:08.762 21:00:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.762 21:00:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.762 21:00:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.762 21:00:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.763 21:00:29 -- common/autotest_common.sh@10 -- # set +x 00:14:09.022 [2024-12-08 21:00:29.873029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
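test_create_ublk, starting here against a two-core target (-m 0x3), creates a fresh ublk disk and then checks both the RPC view and the kernel view of it. A condensed sketch of what the log below performs, with values copied from the logged commands:

  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create 128 4096               # auto-named Malloc0, 128 MiB
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
  ./scripts/rpc.py ublk_get_disks -n 0 | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
  test -b /dev/ublkb0   # the kernel must expose the disk as a block device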
00:14:09.022 [2024-12-08 21:00:29.873187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69732 ] 00:14:09.022 [2024-12-08 21:00:30.032201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.281 [2024-12-08 21:00:30.177108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:09.281 [2024-12-08 21:00:30.177473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.281 [2024-12-08 21:00:30.177700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.850 21:00:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.850 21:00:30 -- common/autotest_common.sh@862 -- # return 0 00:14:09.850 21:00:30 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:09.850 21:00:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:09.850 21:00:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.850 21:00:30 -- common/autotest_common.sh@10 -- # set +x 00:14:09.850 ************************************ 00:14:09.850 START TEST test_create_ublk 00:14:09.850 ************************************ 00:14:09.850 21:00:30 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:14:09.850 21:00:30 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:09.850 21:00:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.850 21:00:30 -- common/autotest_common.sh@10 -- # set +x 00:14:09.850 [2024-12-08 21:00:30.818416] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:09.850 21:00:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.850 21:00:30 -- ublk/ublk.sh@33 -- # ublk_target= 00:14:09.850 21:00:30 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:09.850 21:00:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.850 21:00:30 -- common/autotest_common.sh@10 -- # set +x 00:14:10.108 21:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.108 21:00:31 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:10.108 21:00:31 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:10.108 21:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.108 21:00:31 -- common/autotest_common.sh@10 -- # set +x 00:14:10.108 [2024-12-08 21:00:31.037368] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:10.108 [2024-12-08 21:00:31.037884] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:10.108 [2024-12-08 21:00:31.037906] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:10.108 [2024-12-08 21:00:31.037919] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:10.109 [2024-12-08 21:00:31.046368] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:10.109 [2024-12-08 21:00:31.046400] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:10.109 [2024-12-08 21:00:31.053205] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:10.109 [2024-12-08 21:00:31.063338] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:10.109 [2024-12-08 21:00:31.081148] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:14:10.109 21:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.109 21:00:31 -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:10.109 21:00:31 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:10.109 21:00:31 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:10.109 21:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.109 21:00:31 -- common/autotest_common.sh@10 -- # set +x 00:14:10.109 21:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.109 21:00:31 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:10.109 { 00:14:10.109 "ublk_device": "/dev/ublkb0", 00:14:10.109 "id": 0, 00:14:10.109 "queue_depth": 512, 00:14:10.109 "num_queues": 4, 00:14:10.109 "bdev_name": "Malloc0" 00:14:10.109 } 00:14:10.109 ]' 00:14:10.109 21:00:31 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:10.368 21:00:31 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:10.368 21:00:31 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:10.368 21:00:31 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:10.368 21:00:31 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:10.368 21:00:31 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:10.368 21:00:31 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:10.368 21:00:31 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:10.368 21:00:31 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:10.368 21:00:31 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:10.368 21:00:31 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:10.368 21:00:31 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:10.368 21:00:31 -- lvol/common.sh@41 -- # local offset=0 00:14:10.368 21:00:31 -- lvol/common.sh@42 -- # local size=134217728 00:14:10.368 21:00:31 -- lvol/common.sh@43 -- # local rw=write 00:14:10.368 21:00:31 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:10.368 21:00:31 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:10.368 21:00:31 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:10.368 21:00:31 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:10.368 21:00:31 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:10.368 21:00:31 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:10.368 21:00:31 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:10.627 fio: verification read phase will never start because write phase uses all of runtime 00:14:10.627 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:10.627 fio-3.35 00:14:10.627 Starting 1 process 00:14:20.608 00:14:20.608 fio_test: (groupid=0, jobs=1): err= 0: pid=69782: Sun Dec 8 21:00:41 2024 00:14:20.608 write: IOPS=13.4k, BW=52.2MiB/s (54.8MB/s)(522MiB/10002msec); 0 zone resets 00:14:20.608 clat (usec): min=44, max=4030, avg=73.63, stdev=121.99 00:14:20.608 lat (usec): min=44, max=4031, avg=74.26, stdev=122.01 00:14:20.608 clat percentiles (usec): 00:14:20.608 | 1.00th=[ 50], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62], 00:14:20.608 | 30.00th=[ 
63], 40.00th=[ 64], 50.00th=[ 64], 60.00th=[ 65], 00:14:20.608 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 83], 95.00th=[ 90], 00:14:20.608 | 99.00th=[ 111], 99.50th=[ 123], 99.90th=[ 2573], 99.95th=[ 3097], 00:14:20.608 | 99.99th=[ 3720] 00:14:20.608 bw ( KiB/s): min=51944, max=57552, per=100.00%, avg=53567.58, stdev=1184.23, samples=19 00:14:20.608 iops : min=12986, max=14388, avg=13391.79, stdev=296.02, samples=19 00:14:20.608 lat (usec) : 50=0.96%, 100=96.73%, 250=2.00%, 500=0.01%, 750=0.01% 00:14:20.608 lat (usec) : 1000=0.02% 00:14:20.608 lat (msec) : 2=0.11%, 4=0.15%, 10=0.01% 00:14:20.608 cpu : usr=2.73%, sys=7.79%, ctx=133762, majf=0, minf=794 00:14:20.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.608 issued rwts: total=0,133758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.608 00:14:20.608 Run status group 0 (all jobs): 00:14:20.608 WRITE: bw=52.2MiB/s (54.8MB/s), 52.2MiB/s-52.2MiB/s (54.8MB/s-54.8MB/s), io=522MiB (548MB), run=10002-10002msec 00:14:20.608 00:14:20.608 Disk stats (read/write): 00:14:20.608 ublkb0: ios=0/132371, merge=0/0, ticks=0/8942, in_queue=8943, util=99.10% 00:14:20.608 21:00:41 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:20.608 21:00:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.608 21:00:41 -- common/autotest_common.sh@10 -- # set +x 00:14:20.608 [2024-12-08 21:00:41.611651] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:20.608 [2024-12-08 21:00:41.647682] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:20.608 [2024-12-08 21:00:41.648778] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:20.866 [2024-12-08 21:00:41.656215] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:20.866 [2024-12-08 21:00:41.656607] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:20.866 [2024-12-08 21:00:41.656641] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:20.866 21:00:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.866 21:00:41 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:20.866 21:00:41 -- common/autotest_common.sh@650 -- # local es=0 00:14:20.866 21:00:41 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:20.866 21:00:41 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:20.866 21:00:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.866 21:00:41 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:20.866 21:00:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.866 21:00:41 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:20.866 21:00:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.866 21:00:41 -- common/autotest_common.sh@10 -- # set +x 00:14:20.866 [2024-12-08 21:00:41.671258] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:20.866 request: 00:14:20.866 { 00:14:20.866 "ublk_id": 0, 00:14:20.866 "method": "ublk_stop_disk", 00:14:20.866 "req_id": 1 00:14:20.866 } 00:14:20.866 Got JSON-RPC error response 00:14:20.866 response: 00:14:20.866 { 00:14:20.866 "code": -19, 00:14:20.866 
"message": "No such device" 00:14:20.866 } 00:14:20.866 21:00:41 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:20.866 21:00:41 -- common/autotest_common.sh@653 -- # es=1 00:14:20.866 21:00:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:20.866 21:00:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:20.866 21:00:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:20.866 21:00:41 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:20.866 21:00:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.866 21:00:41 -- common/autotest_common.sh@10 -- # set +x 00:14:20.866 [2024-12-08 21:00:41.685261] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:20.866 [2024-12-08 21:00:41.693199] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:20.866 [2024-12-08 21:00:41.693250] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:20.866 21:00:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.866 21:00:41 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:20.866 21:00:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.866 21:00:41 -- common/autotest_common.sh@10 -- # set +x 00:14:21.432 21:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.432 21:00:42 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:21.432 21:00:42 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:21.432 21:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.432 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.432 21:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.432 21:00:42 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:21.432 21:00:42 -- lvol/common.sh@26 -- # jq length 00:14:21.432 21:00:42 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:21.432 21:00:42 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:21.432 21:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.432 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.432 21:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.432 21:00:42 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:21.432 21:00:42 -- lvol/common.sh@28 -- # jq length 00:14:21.432 21:00:42 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:21.432 00:14:21.432 real 0m11.514s 00:14:21.432 user 0m0.744s 00:14:21.432 sys 0m0.872s 00:14:21.432 21:00:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:21.432 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.432 ************************************ 00:14:21.432 END TEST test_create_ublk 00:14:21.432 ************************************ 00:14:21.432 21:00:42 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:21.432 21:00:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:21.432 21:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:21.432 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.432 ************************************ 00:14:21.432 START TEST test_create_multi_ublk 00:14:21.432 ************************************ 00:14:21.432 21:00:42 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:14:21.432 21:00:42 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:21.432 21:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.432 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.432 [2024-12-08 21:00:42.383132] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created 
successfully 00:14:21.432 21:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.432 21:00:42 -- ublk/ublk.sh@62 -- # ublk_target= 00:14:21.432 21:00:42 -- ublk/ublk.sh@64 -- # seq 0 3 00:14:21.432 21:00:42 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.432 21:00:42 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:21.432 21:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.432 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.693 21:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.693 21:00:42 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:21.693 21:00:42 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:21.693 21:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.693 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.693 [2024-12-08 21:00:42.674298] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:21.693 [2024-12-08 21:00:42.674811] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:21.693 [2024-12-08 21:00:42.674832] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:21.693 [2024-12-08 21:00:42.674844] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.693 [2024-12-08 21:00:42.685203] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.693 [2024-12-08 21:00:42.685243] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.693 [2024-12-08 21:00:42.692287] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.693 [2024-12-08 21:00:42.693128] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:21.693 [2024-12-08 21:00:42.717290] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.693 21:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.693 21:00:42 -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:21.693 21:00:42 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.693 21:00:42 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:21.693 21:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.693 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.972 21:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.972 21:00:42 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:21.972 21:00:42 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:21.972 21:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.972 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:21.972 [2024-12-08 21:00:42.928279] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:21.972 [2024-12-08 21:00:42.928861] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:21.972 [2024-12-08 21:00:42.928908] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:21.972 [2024-12-08 21:00:42.928917] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.972 [2024-12-08 21:00:42.937371] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.972 [2024-12-08 21:00:42.937397] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.972 
[2024-12-08 21:00:42.944172] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.972 [2024-12-08 21:00:42.945024] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:21.972 [2024-12-08 21:00:42.953267] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.972 21:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.972 21:00:42 -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:21.972 21:00:42 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.972 21:00:42 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:21.972 21:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.972 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:22.246 21:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.246 21:00:43 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:22.246 21:00:43 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:22.246 21:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.246 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:14:22.246 [2024-12-08 21:00:43.188316] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:22.246 [2024-12-08 21:00:43.188873] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:22.246 [2024-12-08 21:00:43.188895] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:22.246 [2024-12-08 21:00:43.188910] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:22.246 [2024-12-08 21:00:43.196302] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:22.247 [2024-12-08 21:00:43.196334] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:22.247 [2024-12-08 21:00:43.204159] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:22.247 [2024-12-08 21:00:43.205016] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:22.247 [2024-12-08 21:00:43.213179] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:22.247 21:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.247 21:00:43 -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:22.247 21:00:43 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.247 21:00:43 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:22.247 21:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.247 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:14:22.506 21:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.506 21:00:43 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:22.506 21:00:43 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:22.506 21:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.506 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:14:22.506 [2024-12-08 21:00:43.429325] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:22.506 [2024-12-08 21:00:43.429830] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:22.506 [2024-12-08 21:00:43.429855] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:22.506 [2024-12-08 21:00:43.429865] ublk.c: 433:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:22.506 [2024-12-08 21:00:43.437242] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:22.506 [2024-12-08 21:00:43.437266] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:22.506 [2024-12-08 21:00:43.444139] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:22.506 [2024-12-08 21:00:43.444959] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:22.506 [2024-12-08 21:00:43.456223] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:22.506 21:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.506 21:00:43 -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:22.506 21:00:43 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:22.506 21:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.506 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:14:22.506 21:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.506 21:00:43 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:22.506 { 00:14:22.506 "ublk_device": "/dev/ublkb0", 00:14:22.507 "id": 0, 00:14:22.507 "queue_depth": 512, 00:14:22.507 "num_queues": 4, 00:14:22.507 "bdev_name": "Malloc0" 00:14:22.507 }, 00:14:22.507 { 00:14:22.507 "ublk_device": "/dev/ublkb1", 00:14:22.507 "id": 1, 00:14:22.507 "queue_depth": 512, 00:14:22.507 "num_queues": 4, 00:14:22.507 "bdev_name": "Malloc1" 00:14:22.507 }, 00:14:22.507 { 00:14:22.507 "ublk_device": "/dev/ublkb2", 00:14:22.507 "id": 2, 00:14:22.507 "queue_depth": 512, 00:14:22.507 "num_queues": 4, 00:14:22.507 "bdev_name": "Malloc2" 00:14:22.507 }, 00:14:22.507 { 00:14:22.507 "ublk_device": "/dev/ublkb3", 00:14:22.507 "id": 3, 00:14:22.507 "queue_depth": 512, 00:14:22.507 "num_queues": 4, 00:14:22.507 "bdev_name": "Malloc3" 00:14:22.507 } 00:14:22.507 ]' 00:14:22.507 21:00:43 -- ublk/ublk.sh@72 -- # seq 0 3 00:14:22.507 21:00:43 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.507 21:00:43 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:22.507 21:00:43 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:22.507 21:00:43 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:22.766 21:00:43 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:22.766 21:00:43 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:22.766 21:00:43 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.766 21:00:43 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:22.766 21:00:43 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.766 21:00:43 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:22.766 21:00:43 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:22.766 21:00:43 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.766 21:00:43 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:22.766 21:00:43 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:22.766 21:00:43 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:23.026 21:00:43 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:23.026 21:00:43 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:23.026 21:00:43 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:23.026 21:00:43 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:23.026 21:00:43 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:23.026 21:00:43 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:23.026 21:00:44 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:23.026 21:00:44 -- 
ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.026 21:00:44 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:23.026 21:00:44 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:23.026 21:00:44 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:23.285 21:00:44 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:23.285 21:00:44 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:23.285 21:00:44 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:23.285 21:00:44 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:23.285 21:00:44 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:23.285 21:00:44 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:23.285 21:00:44 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:23.285 21:00:44 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.285 21:00:44 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:23.285 21:00:44 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:23.285 21:00:44 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:23.545 21:00:44 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:23.545 21:00:44 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:23.545 21:00:44 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:23.545 21:00:44 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:23.545 21:00:44 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:23.545 21:00:44 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:23.545 21:00:44 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:23.545 21:00:44 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:23.545 21:00:44 -- ublk/ublk.sh@85 -- # seq 0 3 00:14:23.545 21:00:44 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.545 21:00:44 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:23.545 21:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.545 21:00:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.545 [2024-12-08 21:00:44.532326] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:23.545 [2024-12-08 21:00:44.565460] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:23.545 [2024-12-08 21:00:44.566661] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:23.545 [2024-12-08 21:00:44.581253] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:23.545 [2024-12-08 21:00:44.581670] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:23.545 [2024-12-08 21:00:44.581698] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:23.545 21:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.545 21:00:44 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.545 21:00:44 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:23.545 21:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.545 21:00:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.545 [2024-12-08 21:00:44.585358] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:23.804 [2024-12-08 21:00:44.628525] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:23.804 [2024-12-08 21:00:44.629635] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:23.804 [2024-12-08 21:00:44.636174] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:23.804 [2024-12-08 21:00:44.636543] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:23.804 [2024-12-08 21:00:44.636567] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:23.804 21:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.804 21:00:44 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.804 21:00:44 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:23.804 21:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.804 21:00:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.804 [2024-12-08 21:00:44.649288] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:23.804 [2024-12-08 21:00:44.690158] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:23.804 [2024-12-08 21:00:44.690637] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:23.804 [2024-12-08 21:00:44.696185] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:23.804 [2024-12-08 21:00:44.696579] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:23.804 [2024-12-08 21:00:44.696610] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:23.804 21:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.804 21:00:44 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.804 21:00:44 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:23.804 21:00:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.804 21:00:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.804 [2024-12-08 21:00:44.704257] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:23.804 [2024-12-08 21:00:44.744164] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:23.804 [2024-12-08 21:00:44.745050] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:23.804 [2024-12-08 21:00:44.752117] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:23.804 [2024-12-08 21:00:44.752415] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:23.804 [2024-12-08 21:00:44.752440] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:23.804 21:00:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.804 21:00:44 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:24.064 [2024-12-08 21:00:45.037243] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:24.064 [2024-12-08 21:00:45.045168] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:24.064 [2024-12-08 21:00:45.045221] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:24.064 21:00:45 -- ublk/ublk.sh@93 -- # seq 0 3 00:14:24.064 21:00:45 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.064 21:00:45 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:24.064 21:00:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.064 21:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:24.637 21:00:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.637 21:00:45 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.637 21:00:45 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:24.637 21:00:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.637 21:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:24.896 21:00:45 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.896 21:00:45 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.896 21:00:45 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:24.896 21:00:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.896 21:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:25.155 21:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.155 21:00:46 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:25.155 21:00:46 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:25.155 21:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.155 21:00:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.414 21:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.414 21:00:46 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:25.414 21:00:46 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:25.414 21:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.414 21:00:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.414 21:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.414 21:00:46 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:25.414 21:00:46 -- lvol/common.sh@26 -- # jq length 00:14:25.673 21:00:46 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:25.673 21:00:46 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:25.673 21:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.673 21:00:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.673 21:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.673 21:00:46 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:25.673 21:00:46 -- lvol/common.sh@28 -- # jq length 00:14:25.673 21:00:46 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:25.673 00:14:25.673 real 0m4.153s 00:14:25.673 user 0m1.338s 00:14:25.673 sys 0m0.166s 00:14:25.673 21:00:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:25.673 ************************************ 00:14:25.673 END TEST test_create_multi_ublk 00:14:25.673 21:00:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.673 ************************************ 00:14:25.673 21:00:46 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:25.673 21:00:46 -- ublk/ublk.sh@147 -- # cleanup 00:14:25.673 21:00:46 -- ublk/ublk.sh@130 -- # killprocess 69732 00:14:25.673 21:00:46 -- common/autotest_common.sh@936 -- # '[' -z 69732 ']' 00:14:25.673 21:00:46 -- common/autotest_common.sh@940 -- # kill -0 69732 00:14:25.673 21:00:46 -- common/autotest_common.sh@941 -- # uname 00:14:25.673 21:00:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:25.673 21:00:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69732 00:14:25.673 21:00:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:25.673 21:00:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:25.673 killing process with pid 69732 00:14:25.673 21:00:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69732' 00:14:25.673 21:00:46 -- common/autotest_common.sh@955 -- # kill 69732 00:14:25.673 21:00:46 -- common/autotest_common.sh@960 -- # wait 69732 00:14:26.611 [2024-12-08 21:00:47.356984] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:26.611 [2024-12-08 21:00:47.357062] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:27.550 00:14:27.550 real 0m27.623s 00:14:27.550 user 0m40.585s 00:14:27.550 sys 0m9.589s 00:14:27.550 ************************************ 
00:14:27.550 END TEST ublk 00:14:27.550 ************************************ 00:14:27.550 21:00:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:27.550 21:00:48 -- common/autotest_common.sh@10 -- # set +x 00:14:27.550 21:00:48 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:27.550 21:00:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:27.550 21:00:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:27.550 21:00:48 -- common/autotest_common.sh@10 -- # set +x 00:14:27.550 ************************************ 00:14:27.550 START TEST ublk_recovery 00:14:27.550 ************************************ 00:14:27.550 21:00:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:27.550 * Looking for test storage... 00:14:27.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:27.550 21:00:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:27.550 21:00:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:27.550 21:00:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:27.550 21:00:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:27.550 21:00:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:27.550 21:00:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:27.550 21:00:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:27.550 21:00:48 -- scripts/common.sh@335 -- # IFS=.-: 00:14:27.550 21:00:48 -- scripts/common.sh@335 -- # read -ra ver1 00:14:27.550 21:00:48 -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.550 21:00:48 -- scripts/common.sh@336 -- # read -ra ver2 00:14:27.550 21:00:48 -- scripts/common.sh@337 -- # local 'op=<' 00:14:27.550 21:00:48 -- scripts/common.sh@339 -- # ver1_l=2 00:14:27.550 21:00:48 -- scripts/common.sh@340 -- # ver2_l=1 00:14:27.550 21:00:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:27.550 21:00:48 -- scripts/common.sh@343 -- # case "$op" in 00:14:27.550 21:00:48 -- scripts/common.sh@344 -- # : 1 00:14:27.550 21:00:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:27.550 21:00:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:27.550 21:00:48 -- scripts/common.sh@364 -- # decimal 1 00:14:27.550 21:00:48 -- scripts/common.sh@352 -- # local d=1 00:14:27.550 21:00:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.550 21:00:48 -- scripts/common.sh@354 -- # echo 1 00:14:27.550 21:00:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:27.550 21:00:48 -- scripts/common.sh@365 -- # decimal 2 00:14:27.550 21:00:48 -- scripts/common.sh@352 -- # local d=2 00:14:27.550 21:00:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.550 21:00:48 -- scripts/common.sh@354 -- # echo 2 00:14:27.550 21:00:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:27.550 21:00:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:27.550 21:00:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:27.550 21:00:48 -- scripts/common.sh@367 -- # return 0 00:14:27.550 21:00:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.550 21:00:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:27.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.550 --rc genhtml_branch_coverage=1 00:14:27.550 --rc genhtml_function_coverage=1 00:14:27.550 --rc genhtml_legend=1 00:14:27.550 --rc geninfo_all_blocks=1 00:14:27.550 --rc geninfo_unexecuted_blocks=1 00:14:27.550 00:14:27.550 ' 00:14:27.550 21:00:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:27.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.550 --rc genhtml_branch_coverage=1 00:14:27.550 --rc genhtml_function_coverage=1 00:14:27.550 --rc genhtml_legend=1 00:14:27.550 --rc geninfo_all_blocks=1 00:14:27.550 --rc geninfo_unexecuted_blocks=1 00:14:27.550 00:14:27.550 ' 00:14:27.550 21:00:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:27.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.550 --rc genhtml_branch_coverage=1 00:14:27.550 --rc genhtml_function_coverage=1 00:14:27.550 --rc genhtml_legend=1 00:14:27.550 --rc geninfo_all_blocks=1 00:14:27.550 --rc geninfo_unexecuted_blocks=1 00:14:27.550 00:14:27.550 ' 00:14:27.550 21:00:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:27.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.550 --rc genhtml_branch_coverage=1 00:14:27.550 --rc genhtml_function_coverage=1 00:14:27.550 --rc genhtml_legend=1 00:14:27.550 --rc geninfo_all_blocks=1 00:14:27.550 --rc geninfo_unexecuted_blocks=1 00:14:27.550 00:14:27.550 ' 00:14:27.550 21:00:48 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:27.550 21:00:48 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:27.550 21:00:48 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:27.550 21:00:48 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:27.550 21:00:48 -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:27.550 21:00:48 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:27.550 21:00:48 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:27.550 21:00:48 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:27.550 21:00:48 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:27.550 21:00:48 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:27.550 21:00:48 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=70145 00:14:27.550 21:00:48 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:27.550 21:00:48 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 70145 00:14:27.550 21:00:48 -- 
ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:27.550 21:00:48 -- common/autotest_common.sh@829 -- # '[' -z 70145 ']' 00:14:27.550 21:00:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.550 21:00:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.550 21:00:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.550 21:00:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.550 21:00:48 -- common/autotest_common.sh@10 -- # set +x 00:14:27.550 [2024-12-08 21:00:48.583758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:27.550 [2024-12-08 21:00:48.583932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70145 ] 00:14:27.810 [2024-12-08 21:00:48.756181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:28.070 [2024-12-08 21:00:48.900136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.070 [2024-12-08 21:00:48.900484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.070 [2024-12-08 21:00:48.900593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.639 21:00:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.639 21:00:49 -- common/autotest_common.sh@862 -- # return 0 00:14:28.639 21:00:49 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:28.639 21:00:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.639 21:00:49 -- common/autotest_common.sh@10 -- # set +x 00:14:28.639 [2024-12-08 21:00:49.511300] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:28.639 21:00:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.639 21:00:49 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:28.639 21:00:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.639 21:00:49 -- common/autotest_common.sh@10 -- # set +x 00:14:28.639 malloc0 00:14:28.639 21:00:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.639 21:00:49 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:28.639 21:00:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.639 21:00:49 -- common/autotest_common.sh@10 -- # set +x 00:14:28.639 [2024-12-08 21:00:49.621483] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:28.639 [2024-12-08 21:00:49.621604] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:28.639 [2024-12-08 21:00:49.621618] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:28.639 [2024-12-08 21:00:49.621630] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.639 [2024-12-08 21:00:49.629450] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.639 [2024-12-08 21:00:49.629496] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.639 [2024-12-08 21:00:49.640131] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.639 [2024-12-08 21:00:49.640310] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:28.639 [2024-12-08 21:00:49.655157] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.639 1 00:14:28.639 21:00:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.639 21:00:49 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:30.017 21:00:50 -- ublk/ublk_recovery.sh@31 -- # fio_proc=70179 00:14:30.017 21:00:50 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:30.017 21:00:50 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:30.017 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:30.017 fio-3.35 00:14:30.017 Starting 1 process 00:14:35.291 21:00:55 -- ublk/ublk_recovery.sh@36 -- # kill -9 70145 00:14:35.291 21:00:55 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:40.562 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 70145 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:40.562 21:01:00 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=70286 00:14:40.562 21:01:00 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:40.562 21:01:00 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.562 21:01:00 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 70286 00:14:40.562 21:01:00 -- common/autotest_common.sh@829 -- # '[' -z 70286 ']' 00:14:40.562 21:01:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.562 21:01:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.562 21:01:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.562 21:01:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.562 21:01:00 -- common/autotest_common.sh@10 -- # set +x 00:14:40.562 [2024-12-08 21:01:00.800168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
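[Editor's note: the kill -9 above and the fresh spdk_tgt now starting (pid 70286) are the core of the recovery scenario: the target process is destroyed while fio still has I/O in flight on /dev/ublkb1, and a replacement target re-attaches the same kernel device. A rough reconstruction of the driving steps, based only on the commands visible in this trace; the $tgt_pid variable is illustrative, not taken from the script:]

    # sketch: crash-and-recover flow from ublk_recovery.sh
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$tgt_pid"                             # hard-kill the running spdk_tgt (pid 70145 here)
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &      # start a replacement target (pid 70286 here)
    rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev in the new target
    rpc.py ublk_recover_disk malloc0 1             # re-attach /dev/ublkb1; the fio job keeps running

[The ublk_recover_disk path below drives UBLK_CMD_GET_DEV_INFO, UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY instead of the ADD_DEV/START_DEV sequence used for a fresh device.]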
00:14:40.562 [2024-12-08 21:01:00.800338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70286 ] 00:14:40.562 [2024-12-08 21:01:00.962126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:40.562 [2024-12-08 21:01:01.105179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:40.562 [2024-12-08 21:01:01.105589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.562 [2024-12-08 21:01:01.105615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.499 21:01:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.499 21:01:02 -- common/autotest_common.sh@862 -- # return 0 00:14:41.499 21:01:02 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:41.499 21:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.499 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:41.499 [2024-12-08 21:01:02.370518] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:41.499 21:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.499 21:01:02 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:41.499 21:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.499 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:41.499 malloc0 00:14:41.499 21:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.499 21:01:02 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:41.499 21:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.499 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:41.499 [2024-12-08 21:01:02.480289] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:41.499 [2024-12-08 21:01:02.480343] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:41.499 [2024-12-08 21:01:02.480371] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:41.499 [2024-12-08 21:01:02.488198] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:41.499 [2024-12-08 21:01:02.488221] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:41.499 [2024-12-08 21:01:02.488308] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:41.499 1 00:14:41.499 21:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.499 21:01:02 -- ublk/ublk_recovery.sh@52 -- # wait 70179 00:14:41.499 [2024-12-08 21:01:02.496152] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:41.499 [2024-12-08 21:01:02.499700] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:41.499 [2024-12-08 21:01:02.505196] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:41.499 [2024-12-08 21:01:02.505240] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:37.749 00:15:37.749 fio_test: (groupid=0, jobs=1): err= 0: pid=70183: Sun Dec 8 21:01:50 2024 00:15:37.749 read: IOPS=21.3k, BW=83.3MiB/s (87.4MB/s)(5001MiB/60001msec) 00:15:37.749 slat (usec): min=2, max=259, avg= 5.68, stdev= 
2.85 00:15:37.749 clat (usec): min=817, max=6845.7k, avg=2966.61, stdev=49865.22 00:15:37.749 lat (usec): min=823, max=6845.7k, avg=2972.29, stdev=49865.22 00:15:37.749 clat percentiles (usec): 00:15:37.749 | 1.00th=[ 2147], 5.00th=[ 2278], 10.00th=[ 2311], 20.00th=[ 2343], 00:15:37.749 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:15:37.749 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 2835], 95.00th=[ 3687], 00:15:37.749 | 99.00th=[ 5800], 99.50th=[ 6194], 99.90th=[ 7570], 99.95th=[ 8717], 00:15:37.749 | 99.99th=[12911] 00:15:37.749 bw ( KiB/s): min=37200, max=102864, per=100.00%, avg=95783.25, stdev=10890.66, samples=106 00:15:37.749 iops : min= 9300, max=25716, avg=23945.78, stdev=2722.67, samples=106 00:15:37.749 write: IOPS=21.3k, BW=83.3MiB/s (87.3MB/s)(4997MiB/60001msec); 0 zone resets 00:15:37.749 slat (usec): min=2, max=1048, avg= 5.94, stdev= 3.08 00:15:37.749 clat (usec): min=809, max=6845.7k, avg=3021.83, stdev=46861.53 00:15:37.749 lat (usec): min=814, max=6845.7k, avg=3027.77, stdev=46861.53 00:15:37.749 clat percentiles (usec): 00:15:37.749 | 1.00th=[ 2180], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2474], 00:15:37.749 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:15:37.749 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2900], 95.00th=[ 3589], 00:15:37.749 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7570], 99.95th=[ 8717], 00:15:37.749 | 99.99th=[12911] 00:15:37.749 bw ( KiB/s): min=38056, max=102744, per=100.00%, avg=95714.27, stdev=10690.99, samples=106 00:15:37.749 iops : min= 9514, max=25686, avg=23928.54, stdev=2672.75, samples=106 00:15:37.749 lat (usec) : 1000=0.01% 00:15:37.749 lat (msec) : 2=0.41%, 4=95.58%, 10=3.98%, 20=0.03%, >=2000=0.01% 00:15:37.749 cpu : usr=10.83%, sys=23.00%, ctx=73561, majf=0, minf=14 00:15:37.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:37.749 issued rwts: total=1280235,1279121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:37.749 00:15:37.749 Run status group 0 (all jobs): 00:15:37.749 READ: bw=83.3MiB/s (87.4MB/s), 83.3MiB/s-83.3MiB/s (87.4MB/s-87.4MB/s), io=5001MiB (5244MB), run=60001-60001msec 00:15:37.749 WRITE: bw=83.3MiB/s (87.3MB/s), 83.3MiB/s-83.3MiB/s (87.3MB/s-87.3MB/s), io=4997MiB (5239MB), run=60001-60001msec 00:15:37.749 00:15:37.749 Disk stats (read/write): 00:15:37.749 ublkb1: ios=1277392/1276257, merge=0/0, ticks=3681206/3618498, in_queue=7299704, util=99.94% 00:15:37.749 21:01:50 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:37.749 21:01:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.749 21:01:50 -- common/autotest_common.sh@10 -- # set +x 00:15:37.749 [2024-12-08 21:01:50.932346] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:37.749 [2024-12-08 21:01:50.987145] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:37.749 [2024-12-08 21:01:50.987342] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:37.749 [2024-12-08 21:01:50.998173] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:37.749 [2024-12-08 21:01:50.998304] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:37.749 
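[Editor's note: the 60-second fio summary above is the payoff of the recovery test: roughly 1.28M reads and 1.28M writes completed and ublkb1 reports util=99.94%, i.e. I/O kept flowing across the target kill and recovery. A quick back-of-the-envelope check of the reported rate, not part of the suite:]

    # 1280235 reads issued over a 60 s run
    echo $(( 1280235 / 60 ))   # => 21337, consistent with the reported 21.3k read IOPS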
[2024-12-08 21:01:50.998321] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:37.749 21:01:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.749 21:01:51 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:37.749 21:01:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.749 21:01:51 -- common/autotest_common.sh@10 -- # set +x 00:15:37.749 [2024-12-08 21:01:51.008285] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:37.749 [2024-12-08 21:01:51.014007] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:37.749 [2024-12-08 21:01:51.014058] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:37.749 21:01:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.749 21:01:51 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:37.749 21:01:51 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:37.749 21:01:51 -- ublk/ublk_recovery.sh@14 -- # killprocess 70286 00:15:37.749 21:01:51 -- common/autotest_common.sh@936 -- # '[' -z 70286 ']' 00:15:37.749 21:01:51 -- common/autotest_common.sh@940 -- # kill -0 70286 00:15:37.749 21:01:51 -- common/autotest_common.sh@941 -- # uname 00:15:37.749 21:01:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.749 21:01:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70286 00:15:37.749 21:01:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:37.749 21:01:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:37.749 killing process with pid 70286 00:15:37.749 21:01:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70286' 00:15:37.749 21:01:51 -- common/autotest_common.sh@955 -- # kill 70286 00:15:37.749 21:01:51 -- common/autotest_common.sh@960 -- # wait 70286 00:15:37.749 [2024-12-08 21:01:52.288920] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:37.749 [2024-12-08 21:01:52.288994] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:37.749 ************************************ 00:15:37.749 END TEST ublk_recovery 00:15:37.749 ************************************ 00:15:37.749 00:15:37.749 real 1m4.980s 00:15:37.749 user 1m46.975s 00:15:37.749 sys 0m32.523s 00:15:37.749 21:01:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:37.749 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:15:37.749 21:01:53 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@255 -- # timing_exit lib 00:15:37.749 21:01:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:37.749 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:15:37.749 21:01:53 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:15:37.749 21:01:53 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:37.750 21:01:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:37.750 21:01:53 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:15:37.750 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:15:37.750 ************************************ 00:15:37.750 START TEST ftl 00:15:37.750 ************************************ 00:15:37.750 21:01:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:37.750 * Looking for test storage... 00:15:37.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:37.750 21:01:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:37.750 21:01:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:37.750 21:01:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:37.750 21:01:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:37.750 21:01:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:37.750 21:01:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:37.750 21:01:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:37.750 21:01:53 -- scripts/common.sh@335 -- # IFS=.-: 00:15:37.750 21:01:53 -- scripts/common.sh@335 -- # read -ra ver1 00:15:37.750 21:01:53 -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.750 21:01:53 -- scripts/common.sh@336 -- # read -ra ver2 00:15:37.750 21:01:53 -- scripts/common.sh@337 -- # local 'op=<' 00:15:37.750 21:01:53 -- scripts/common.sh@339 -- # ver1_l=2 00:15:37.750 21:01:53 -- scripts/common.sh@340 -- # ver2_l=1 00:15:37.750 21:01:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:37.750 21:01:53 -- scripts/common.sh@343 -- # case "$op" in 00:15:37.750 21:01:53 -- scripts/common.sh@344 -- # : 1 00:15:37.750 21:01:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:37.750 21:01:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:37.750 21:01:53 -- scripts/common.sh@364 -- # decimal 1 00:15:37.750 21:01:53 -- scripts/common.sh@352 -- # local d=1 00:15:37.750 21:01:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.750 21:01:53 -- scripts/common.sh@354 -- # echo 1 00:15:37.750 21:01:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:37.750 21:01:53 -- scripts/common.sh@365 -- # decimal 2 00:15:37.750 21:01:53 -- scripts/common.sh@352 -- # local d=2 00:15:37.750 21:01:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.750 21:01:53 -- scripts/common.sh@354 -- # echo 2 00:15:37.750 21:01:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:37.750 21:01:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:37.750 21:01:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:37.750 21:01:53 -- scripts/common.sh@367 -- # return 0 00:15:37.750 21:01:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.750 21:01:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:37.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.750 --rc genhtml_branch_coverage=1 00:15:37.750 --rc genhtml_function_coverage=1 00:15:37.750 --rc genhtml_legend=1 00:15:37.750 --rc geninfo_all_blocks=1 00:15:37.750 --rc geninfo_unexecuted_blocks=1 00:15:37.750 00:15:37.750 ' 00:15:37.750 21:01:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:37.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.750 --rc genhtml_branch_coverage=1 00:15:37.750 --rc genhtml_function_coverage=1 00:15:37.750 --rc genhtml_legend=1 00:15:37.750 --rc geninfo_all_blocks=1 00:15:37.750 --rc geninfo_unexecuted_blocks=1 00:15:37.750 00:15:37.750 ' 00:15:37.750 21:01:53 -- common/autotest_common.sh@1704 
-- # export 'LCOV=lcov 00:15:37.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.750 --rc genhtml_branch_coverage=1 00:15:37.750 --rc genhtml_function_coverage=1 00:15:37.750 --rc genhtml_legend=1 00:15:37.750 --rc geninfo_all_blocks=1 00:15:37.750 --rc geninfo_unexecuted_blocks=1 00:15:37.750 00:15:37.750 ' 00:15:37.750 21:01:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:37.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.750 --rc genhtml_branch_coverage=1 00:15:37.750 --rc genhtml_function_coverage=1 00:15:37.750 --rc genhtml_legend=1 00:15:37.750 --rc geninfo_all_blocks=1 00:15:37.750 --rc geninfo_unexecuted_blocks=1 00:15:37.750 00:15:37.750 ' 00:15:37.750 21:01:53 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:37.750 21:01:53 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:37.750 21:01:53 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:37.750 21:01:53 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:37.750 21:01:53 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:37.750 21:01:53 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:37.750 21:01:53 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.750 21:01:53 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:37.750 21:01:53 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:37.750 21:01:53 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.750 21:01:53 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.750 21:01:53 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:37.750 21:01:53 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:37.750 21:01:53 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:37.750 21:01:53 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:37.750 21:01:53 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:37.750 21:01:53 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:37.750 21:01:53 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.750 21:01:53 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.750 21:01:53 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:37.750 21:01:53 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:37.750 21:01:53 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:37.750 21:01:53 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:37.750 21:01:53 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:37.750 21:01:53 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:37.750 21:01:53 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:37.750 21:01:53 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:37.750 21:01:53 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.750 21:01:53 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.750 21:01:53 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.750 21:01:53 -- ftl/ftl.sh@31 -- # trap at_ftl_exit 
SIGINT SIGTERM EXIT 00:15:37.750 21:01:53 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:37.750 21:01:53 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:37.750 21:01:53 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:37.750 21:01:53 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:37.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:37.750 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:37.750 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:37.750 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:37.750 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:37.750 21:01:54 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=71091 00:15:37.750 21:01:54 -- ftl/ftl.sh@38 -- # waitforlisten 71091 00:15:37.750 21:01:54 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:37.750 21:01:54 -- common/autotest_common.sh@829 -- # '[' -z 71091 ']' 00:15:37.750 21:01:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.750 21:01:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.750 21:01:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.750 21:01:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.750 21:01:54 -- common/autotest_common.sh@10 -- # set +x 00:15:37.750 [2024-12-08 21:01:54.216475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:37.750 [2024-12-08 21:01:54.217280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71091 ] 00:15:37.750 [2024-12-08 21:01:54.395205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.750 [2024-12-08 21:01:54.618783] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:37.750 [2024-12-08 21:01:54.619057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.750 21:01:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.750 21:01:55 -- common/autotest_common.sh@862 -- # return 0 00:15:37.750 21:01:55 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:37.750 21:01:55 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:37.750 21:01:56 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:37.750 21:01:56 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:37.750 21:01:56 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:37.750 21:01:56 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:37.750 21:01:56 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:37.750 21:01:56 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:15:37.750 21:01:56 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:37.750 21:01:56 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:15:37.750 21:01:56 -- ftl/ftl.sh@50 -- # break 00:15:37.750 21:01:56 -- 
ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:15:37.750 21:01:56 -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:37.750 21:01:56 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:37.750 21:01:56 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:37.750 21:01:57 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:15:37.750 21:01:57 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:37.751 21:01:57 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:15:37.751 21:01:57 -- ftl/ftl.sh@63 -- # break 00:15:37.751 21:01:57 -- ftl/ftl.sh@66 -- # killprocess 71091 00:15:37.751 21:01:57 -- common/autotest_common.sh@936 -- # '[' -z 71091 ']' 00:15:37.751 21:01:57 -- common/autotest_common.sh@940 -- # kill -0 71091 00:15:37.751 21:01:57 -- common/autotest_common.sh@941 -- # uname 00:15:37.751 21:01:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.751 21:01:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71091 00:15:37.751 killing process with pid 71091 00:15:37.751 21:01:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:37.751 21:01:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:37.751 21:01:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71091' 00:15:37.751 21:01:57 -- common/autotest_common.sh@955 -- # kill 71091 00:15:37.751 21:01:57 -- common/autotest_common.sh@960 -- # wait 71091 00:15:38.011 21:01:58 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:15:38.011 21:01:58 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:15:38.011 21:01:58 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:38.011 21:01:58 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:38.011 21:01:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.011 21:01:58 -- common/autotest_common.sh@10 -- # set +x 00:15:38.011 ************************************ 00:15:38.011 START TEST ftl_fio_basic 00:15:38.011 ************************************ 00:15:38.011 21:01:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:38.011 * Looking for test storage... 
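(The killprocess helper traced above for pid 71091 follows a recurring pattern in these scripts: probe the pid with kill -0, log, signal, then reap. A simplified sketch — the real helper in common/autotest_common.sh additionally checks the process name via ps and special-cases sudo, as the trace shows:)

  # simplified sketch of the killprocess pattern seen in the trace above
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1          # is the process still alive?
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                 # reap the child; tolerate nonzero exit
  }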
00:15:38.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.011 21:01:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:38.011 21:01:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:38.011 21:01:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:38.011 21:01:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:38.011 21:01:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:38.011 21:01:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:38.011 21:01:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:38.011 21:01:58 -- scripts/common.sh@335 -- # IFS=.-: 00:15:38.011 21:01:58 -- scripts/common.sh@335 -- # read -ra ver1 00:15:38.011 21:01:58 -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.011 21:01:58 -- scripts/common.sh@336 -- # read -ra ver2 00:15:38.011 21:01:58 -- scripts/common.sh@337 -- # local 'op=<' 00:15:38.011 21:01:58 -- scripts/common.sh@339 -- # ver1_l=2 00:15:38.011 21:01:58 -- scripts/common.sh@340 -- # ver2_l=1 00:15:38.011 21:01:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:38.011 21:01:58 -- scripts/common.sh@343 -- # case "$op" in 00:15:38.011 21:01:58 -- scripts/common.sh@344 -- # : 1 00:15:38.011 21:01:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:38.011 21:01:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:38.011 21:01:59 -- scripts/common.sh@364 -- # decimal 1 00:15:38.011 21:01:59 -- scripts/common.sh@352 -- # local d=1 00:15:38.011 21:01:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.011 21:01:59 -- scripts/common.sh@354 -- # echo 1 00:15:38.011 21:01:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:38.011 21:01:59 -- scripts/common.sh@365 -- # decimal 2 00:15:38.011 21:01:59 -- scripts/common.sh@352 -- # local d=2 00:15:38.011 21:01:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.011 21:01:59 -- scripts/common.sh@354 -- # echo 2 00:15:38.011 21:01:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:38.011 21:01:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:38.011 21:01:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:38.011 21:01:59 -- scripts/common.sh@367 -- # return 0 00:15:38.011 21:01:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.011 21:01:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:38.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.011 --rc genhtml_branch_coverage=1 00:15:38.011 --rc genhtml_function_coverage=1 00:15:38.011 --rc genhtml_legend=1 00:15:38.011 --rc geninfo_all_blocks=1 00:15:38.011 --rc geninfo_unexecuted_blocks=1 00:15:38.011 00:15:38.011 ' 00:15:38.011 21:01:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:38.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.011 --rc genhtml_branch_coverage=1 00:15:38.011 --rc genhtml_function_coverage=1 00:15:38.011 --rc genhtml_legend=1 00:15:38.011 --rc geninfo_all_blocks=1 00:15:38.011 --rc geninfo_unexecuted_blocks=1 00:15:38.011 00:15:38.011 ' 00:15:38.011 21:01:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:38.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.011 --rc genhtml_branch_coverage=1 00:15:38.011 --rc genhtml_function_coverage=1 00:15:38.011 --rc genhtml_legend=1 00:15:38.011 --rc geninfo_all_blocks=1 00:15:38.011 --rc geninfo_unexecuted_blocks=1 00:15:38.011 00:15:38.011 ' 00:15:38.011 21:01:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:38.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.011 --rc genhtml_branch_coverage=1 00:15:38.011 --rc genhtml_function_coverage=1 00:15:38.011 --rc genhtml_legend=1 00:15:38.011 --rc geninfo_all_blocks=1 00:15:38.011 --rc geninfo_unexecuted_blocks=1 00:15:38.011 00:15:38.011 ' 00:15:38.011 21:01:59 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:38.011 21:01:59 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:38.011 21:01:59 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.011 21:01:59 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.011 21:01:59 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:38.011 21:01:59 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:38.011 21:01:59 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.011 21:01:59 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:38.011 21:01:59 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:38.011 21:01:59 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.011 21:01:59 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.011 21:01:59 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:38.011 21:01:59 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:38.011 21:01:59 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:38.011 21:01:59 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:38.011 21:01:59 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:38.011 21:01:59 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:38.011 21:01:59 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.011 21:01:59 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.011 21:01:59 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:38.011 21:01:59 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:38.011 21:01:59 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:38.011 21:01:59 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:38.011 21:01:59 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:38.011 21:01:59 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:38.011 21:01:59 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:38.011 21:01:59 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:38.011 21:01:59 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.011 21:01:59 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.011 21:01:59 -- ftl/fio.sh@11 -- # declare -A suite 00:15:38.011 21:01:59 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:38.011 21:01:59 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:38.011 21:01:59 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:38.011 21:01:59 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.011 21:01:59 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:15:38.011 21:01:59 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:15:38.011 21:01:59 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:38.011 21:01:59 -- ftl/fio.sh@26 -- # uuid= 00:15:38.011 21:01:59 -- ftl/fio.sh@27 -- # timeout=240 00:15:38.011 21:01:59 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:38.011 21:01:59 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:38.011 21:01:59 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:38.011 21:01:59 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:38.011 21:01:59 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:38.011 21:01:59 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:38.011 21:01:59 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:38.011 21:01:59 -- ftl/fio.sh@45 -- # svcpid=71228 00:15:38.011 21:01:59 -- ftl/fio.sh@46 -- # waitforlisten 71228 00:15:38.011 21:01:59 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:38.011 21:01:59 -- common/autotest_common.sh@829 -- # '[' -z 71228 ']' 00:15:38.011 21:01:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.012 21:01:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.012 21:01:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.012 21:01:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.012 21:01:59 -- common/autotest_common.sh@10 -- # set +x 00:15:38.271 [2024-12-08 21:01:59.144963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
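(waitforlisten, traced above for pid 71228, blocks until the freshly started spdk_tgt answers on its RPC socket. A minimal sketch of the same start-and-wait flow — the polling interval and the use of rpc_get_methods as the probe are assumptions; the real helper retries up to max_retries=100 and prints the "Waiting for process..." message seen in the log:)

  # start the target on cores 0-2 (-m 7) and poll its RPC socket until it answers
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
  svcpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &>/dev/null; do
      sleep 0.1
  done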
00:15:38.271 [2024-12-08 21:01:59.145150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71228 ] 00:15:38.271 [2024-12-08 21:01:59.312051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.532 [2024-12-08 21:01:59.462489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:38.532 [2024-12-08 21:01:59.463380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.532 [2024-12-08 21:01:59.463432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.532 [2024-12-08 21:01:59.463422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.101 21:02:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.101 21:02:00 -- common/autotest_common.sh@862 -- # return 0 00:15:39.101 21:02:00 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:39.101 21:02:00 -- ftl/common.sh@54 -- # local name=nvme0 00:15:39.101 21:02:00 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:39.101 21:02:00 -- ftl/common.sh@56 -- # local size=103424 00:15:39.101 21:02:00 -- ftl/common.sh@59 -- # local base_bdev 00:15:39.101 21:02:00 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:39.359 21:02:00 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:39.618 21:02:00 -- ftl/common.sh@62 -- # local base_size 00:15:39.618 21:02:00 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:39.618 21:02:00 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:39.618 21:02:00 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:39.618 21:02:00 -- common/autotest_common.sh@1369 -- # local bs 00:15:39.618 21:02:00 -- common/autotest_common.sh@1370 -- # local nb 00:15:39.618 21:02:00 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:39.618 21:02:00 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:39.618 { 00:15:39.618 "name": "nvme0n1", 00:15:39.618 "aliases": [ 00:15:39.618 "a36360fd-cd21-491f-b33d-0814e4201b31" 00:15:39.618 ], 00:15:39.618 "product_name": "NVMe disk", 00:15:39.618 "block_size": 4096, 00:15:39.618 "num_blocks": 1310720, 00:15:39.618 "uuid": "a36360fd-cd21-491f-b33d-0814e4201b31", 00:15:39.618 "assigned_rate_limits": { 00:15:39.618 "rw_ios_per_sec": 0, 00:15:39.618 "rw_mbytes_per_sec": 0, 00:15:39.618 "r_mbytes_per_sec": 0, 00:15:39.618 "w_mbytes_per_sec": 0 00:15:39.618 }, 00:15:39.618 "claimed": false, 00:15:39.618 "zoned": false, 00:15:39.618 "supported_io_types": { 00:15:39.618 "read": true, 00:15:39.618 "write": true, 00:15:39.618 "unmap": true, 00:15:39.618 "write_zeroes": true, 00:15:39.618 "flush": true, 00:15:39.618 "reset": true, 00:15:39.618 "compare": true, 00:15:39.618 "compare_and_write": false, 00:15:39.618 "abort": true, 00:15:39.618 "nvme_admin": true, 00:15:39.618 "nvme_io": true 00:15:39.618 }, 00:15:39.618 "driver_specific": { 00:15:39.618 "nvme": [ 00:15:39.618 { 00:15:39.618 "pci_address": "0000:00:07.0", 00:15:39.618 "trid": { 00:15:39.618 "trtype": "PCIe", 00:15:39.618 "traddr": "0000:00:07.0" 00:15:39.618 }, 00:15:39.618 "ctrlr_data": { 00:15:39.618 "cntlid": 0, 00:15:39.618 "vendor_id": "0x1b36", 00:15:39.619 "model_number": "QEMU NVMe Ctrl", 00:15:39.619 "serial_number": 
"12341", 00:15:39.619 "firmware_revision": "8.0.0", 00:15:39.619 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:39.619 "oacs": { 00:15:39.619 "security": 0, 00:15:39.619 "format": 1, 00:15:39.619 "firmware": 0, 00:15:39.619 "ns_manage": 1 00:15:39.619 }, 00:15:39.619 "multi_ctrlr": false, 00:15:39.619 "ana_reporting": false 00:15:39.619 }, 00:15:39.619 "vs": { 00:15:39.619 "nvme_version": "1.4" 00:15:39.619 }, 00:15:39.619 "ns_data": { 00:15:39.619 "id": 1, 00:15:39.619 "can_share": false 00:15:39.619 } 00:15:39.619 } 00:15:39.619 ], 00:15:39.619 "mp_policy": "active_passive" 00:15:39.619 } 00:15:39.619 } 00:15:39.619 ]' 00:15:39.619 21:02:00 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:39.878 21:02:00 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:39.878 21:02:00 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:39.878 21:02:00 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:39.878 21:02:00 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:39.878 21:02:00 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:39.878 21:02:00 -- ftl/common.sh@63 -- # base_size=5120 00:15:39.878 21:02:00 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:39.878 21:02:00 -- ftl/common.sh@67 -- # clear_lvols 00:15:39.878 21:02:00 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:39.878 21:02:00 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:39.878 21:02:00 -- ftl/common.sh@28 -- # stores= 00:15:39.878 21:02:00 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:40.136 21:02:01 -- ftl/common.sh@68 -- # lvs=5bf04795-1249-4f91-9038-76af846280d7 00:15:40.136 21:02:01 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5bf04795-1249-4f91-9038-76af846280d7 00:15:40.395 21:02:01 -- ftl/fio.sh@48 -- # split_bdev=a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:40.395 21:02:01 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:40.395 21:02:01 -- ftl/common.sh@35 -- # local name=nvc0 00:15:40.395 21:02:01 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:40.395 21:02:01 -- ftl/common.sh@37 -- # local base_bdev=a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:40.395 21:02:01 -- ftl/common.sh@38 -- # local cache_size= 00:15:40.395 21:02:01 -- ftl/common.sh@41 -- # get_bdev_size a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:40.395 21:02:01 -- common/autotest_common.sh@1367 -- # local bdev_name=a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:40.395 21:02:01 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:40.395 21:02:01 -- common/autotest_common.sh@1369 -- # local bs 00:15:40.395 21:02:01 -- common/autotest_common.sh@1370 -- # local nb 00:15:40.395 21:02:01 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:40.654 21:02:01 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:40.654 { 00:15:40.654 "name": "a28ec911-2773-4f0d-b904-40f9bfda98a7", 00:15:40.654 "aliases": [ 00:15:40.654 "lvs/nvme0n1p0" 00:15:40.654 ], 00:15:40.654 "product_name": "Logical Volume", 00:15:40.654 "block_size": 4096, 00:15:40.654 "num_blocks": 26476544, 00:15:40.654 "uuid": "a28ec911-2773-4f0d-b904-40f9bfda98a7", 00:15:40.654 "assigned_rate_limits": { 00:15:40.654 "rw_ios_per_sec": 0, 00:15:40.654 "rw_mbytes_per_sec": 0, 00:15:40.654 "r_mbytes_per_sec": 0, 00:15:40.654 
"w_mbytes_per_sec": 0 00:15:40.654 }, 00:15:40.654 "claimed": false, 00:15:40.654 "zoned": false, 00:15:40.654 "supported_io_types": { 00:15:40.654 "read": true, 00:15:40.654 "write": true, 00:15:40.654 "unmap": true, 00:15:40.654 "write_zeroes": true, 00:15:40.654 "flush": false, 00:15:40.654 "reset": true, 00:15:40.654 "compare": false, 00:15:40.654 "compare_and_write": false, 00:15:40.654 "abort": false, 00:15:40.654 "nvme_admin": false, 00:15:40.654 "nvme_io": false 00:15:40.654 }, 00:15:40.654 "driver_specific": { 00:15:40.654 "lvol": { 00:15:40.654 "lvol_store_uuid": "5bf04795-1249-4f91-9038-76af846280d7", 00:15:40.654 "base_bdev": "nvme0n1", 00:15:40.654 "thin_provision": true, 00:15:40.654 "snapshot": false, 00:15:40.654 "clone": false, 00:15:40.654 "esnap_clone": false 00:15:40.654 } 00:15:40.654 } 00:15:40.654 } 00:15:40.654 ]' 00:15:40.654 21:02:01 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:40.654 21:02:01 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:40.654 21:02:01 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:40.654 21:02:01 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:40.654 21:02:01 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:40.654 21:02:01 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:40.654 21:02:01 -- ftl/common.sh@41 -- # local base_size=5171 00:15:40.654 21:02:01 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:40.654 21:02:01 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:41.223 21:02:01 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:41.223 21:02:01 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:41.223 21:02:01 -- ftl/common.sh@48 -- # get_bdev_size a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:41.223 21:02:01 -- common/autotest_common.sh@1367 -- # local bdev_name=a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:41.223 21:02:01 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:41.223 21:02:01 -- common/autotest_common.sh@1369 -- # local bs 00:15:41.223 21:02:01 -- common/autotest_common.sh@1370 -- # local nb 00:15:41.223 21:02:01 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:41.223 21:02:02 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:41.223 { 00:15:41.223 "name": "a28ec911-2773-4f0d-b904-40f9bfda98a7", 00:15:41.223 "aliases": [ 00:15:41.223 "lvs/nvme0n1p0" 00:15:41.223 ], 00:15:41.223 "product_name": "Logical Volume", 00:15:41.223 "block_size": 4096, 00:15:41.223 "num_blocks": 26476544, 00:15:41.223 "uuid": "a28ec911-2773-4f0d-b904-40f9bfda98a7", 00:15:41.223 "assigned_rate_limits": { 00:15:41.223 "rw_ios_per_sec": 0, 00:15:41.223 "rw_mbytes_per_sec": 0, 00:15:41.223 "r_mbytes_per_sec": 0, 00:15:41.223 "w_mbytes_per_sec": 0 00:15:41.223 }, 00:15:41.223 "claimed": false, 00:15:41.223 "zoned": false, 00:15:41.223 "supported_io_types": { 00:15:41.223 "read": true, 00:15:41.223 "write": true, 00:15:41.223 "unmap": true, 00:15:41.223 "write_zeroes": true, 00:15:41.223 "flush": false, 00:15:41.223 "reset": true, 00:15:41.223 "compare": false, 00:15:41.223 "compare_and_write": false, 00:15:41.223 "abort": false, 00:15:41.223 "nvme_admin": false, 00:15:41.223 "nvme_io": false 00:15:41.223 }, 00:15:41.223 "driver_specific": { 00:15:41.223 "lvol": { 00:15:41.223 "lvol_store_uuid": "5bf04795-1249-4f91-9038-76af846280d7", 00:15:41.223 "base_bdev": "nvme0n1", 00:15:41.223 "thin_provision": true, 
00:15:41.223 "snapshot": false, 00:15:41.223 "clone": false, 00:15:41.223 "esnap_clone": false 00:15:41.223 } 00:15:41.223 } 00:15:41.223 } 00:15:41.223 ]' 00:15:41.223 21:02:02 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:41.223 21:02:02 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:41.223 21:02:02 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:41.482 21:02:02 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:41.482 21:02:02 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:41.482 21:02:02 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:41.482 21:02:02 -- ftl/common.sh@48 -- # cache_size=5171 00:15:41.482 21:02:02 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:41.740 21:02:02 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:41.740 21:02:02 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:41.740 21:02:02 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:41.740 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:41.740 21:02:02 -- ftl/fio.sh@56 -- # get_bdev_size a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:41.740 21:02:02 -- common/autotest_common.sh@1367 -- # local bdev_name=a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:41.740 21:02:02 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:41.740 21:02:02 -- common/autotest_common.sh@1369 -- # local bs 00:15:41.740 21:02:02 -- common/autotest_common.sh@1370 -- # local nb 00:15:41.740 21:02:02 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a28ec911-2773-4f0d-b904-40f9bfda98a7 00:15:41.999 21:02:02 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:41.999 { 00:15:41.999 "name": "a28ec911-2773-4f0d-b904-40f9bfda98a7", 00:15:41.999 "aliases": [ 00:15:41.999 "lvs/nvme0n1p0" 00:15:41.999 ], 00:15:41.999 "product_name": "Logical Volume", 00:15:41.999 "block_size": 4096, 00:15:41.999 "num_blocks": 26476544, 00:15:41.999 "uuid": "a28ec911-2773-4f0d-b904-40f9bfda98a7", 00:15:41.999 "assigned_rate_limits": { 00:15:41.999 "rw_ios_per_sec": 0, 00:15:41.999 "rw_mbytes_per_sec": 0, 00:15:41.999 "r_mbytes_per_sec": 0, 00:15:41.999 "w_mbytes_per_sec": 0 00:15:41.999 }, 00:15:41.999 "claimed": false, 00:15:41.999 "zoned": false, 00:15:41.999 "supported_io_types": { 00:15:41.999 "read": true, 00:15:41.999 "write": true, 00:15:41.999 "unmap": true, 00:15:41.999 "write_zeroes": true, 00:15:41.999 "flush": false, 00:15:41.999 "reset": true, 00:15:41.999 "compare": false, 00:15:41.999 "compare_and_write": false, 00:15:41.999 "abort": false, 00:15:41.999 "nvme_admin": false, 00:15:41.999 "nvme_io": false 00:15:41.999 }, 00:15:41.999 "driver_specific": { 00:15:41.999 "lvol": { 00:15:41.999 "lvol_store_uuid": "5bf04795-1249-4f91-9038-76af846280d7", 00:15:41.999 "base_bdev": "nvme0n1", 00:15:41.999 "thin_provision": true, 00:15:41.999 "snapshot": false, 00:15:41.999 "clone": false, 00:15:41.999 "esnap_clone": false 00:15:41.999 } 00:15:41.999 } 00:15:41.999 } 00:15:41.999 ]' 00:15:41.999 21:02:02 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:41.999 21:02:02 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:41.999 21:02:02 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:41.999 21:02:02 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:41.999 21:02:02 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:41.999 21:02:02 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:41.999 
21:02:02 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:41.999 21:02:02 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:41.999 21:02:02 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a28ec911-2773-4f0d-b904-40f9bfda98a7 -c nvc0n1p0 --l2p_dram_limit 60 00:15:42.259 [2024-12-08 21:02:03.105538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.105583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:42.259 [2024-12-08 21:02:03.105604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:42.259 [2024-12-08 21:02:03.105615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.105710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.105728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:42.259 [2024-12-08 21:02:03.105742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:42.259 [2024-12-08 21:02:03.105752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.105784] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:42.259 [2024-12-08 21:02:03.106690] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:42.259 [2024-12-08 21:02:03.106730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.106743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:42.259 [2024-12-08 21:02:03.106757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:15:42.259 [2024-12-08 21:02:03.106767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.106917] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 68a8d7bc-ee17-4c51-86e7-eb7ec969822a 00:15:42.259 [2024-12-08 21:02:03.107804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.107829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:42.259 [2024-12-08 21:02:03.107842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:15:42.259 [2024-12-08 21:02:03.107854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.111889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.112127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:42.259 [2024-12-08 21:02:03.112154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.963 ms 00:15:42.259 [2024-12-08 21:02:03.112169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.112277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.112314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:42.259 [2024-12-08 21:02:03.112327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:15:42.259 [2024-12-08 21:02:03.112342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.112499] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.112519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:42.259 [2024-12-08 21:02:03.112531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:42.259 [2024-12-08 21:02:03.112546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.112595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:42.259 [2024-12-08 21:02:03.116705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.116738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:42.259 [2024-12-08 21:02:03.116786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.121 ms 00:15:42.259 [2024-12-08 21:02:03.116796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.116843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.116856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:42.259 [2024-12-08 21:02:03.116869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:42.259 [2024-12-08 21:02:03.116879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.116935] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:42.259 [2024-12-08 21:02:03.117050] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:42.259 [2024-12-08 21:02:03.117109] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:42.259 [2024-12-08 21:02:03.117128] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:42.259 [2024-12-08 21:02:03.117152] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:42.259 [2024-12-08 21:02:03.117166] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:42.259 [2024-12-08 21:02:03.117179] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:42.259 [2024-12-08 21:02:03.117188] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:42.259 [2024-12-08 21:02:03.117204] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:42.259 [2024-12-08 21:02:03.117214] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:42.259 [2024-12-08 21:02:03.117228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.117238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:42.259 [2024-12-08 21:02:03.117250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:15:42.259 [2024-12-08 21:02:03.117260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.117346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.259 [2024-12-08 21:02:03.117361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:42.259 [2024-12-08 21:02:03.117374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.043 ms 00:15:42.259 [2024-12-08 21:02:03.117384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.259 [2024-12-08 21:02:03.117496] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:42.259 [2024-12-08 21:02:03.117511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:42.259 [2024-12-08 21:02:03.117524] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:42.259 [2024-12-08 21:02:03.117535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117551] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:42.259 [2024-12-08 21:02:03.117561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:42.259 [2024-12-08 21:02:03.117583] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:42.259 [2024-12-08 21:02:03.117599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:42.259 [2024-12-08 21:02:03.117620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:42.259 [2024-12-08 21:02:03.117631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:42.259 [2024-12-08 21:02:03.117644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:42.259 [2024-12-08 21:02:03.117653] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:42.259 [2024-12-08 21:02:03.117664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:42.259 [2024-12-08 21:02:03.117674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:42.259 [2024-12-08 21:02:03.117696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:42.259 [2024-12-08 21:02:03.117707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:42.259 [2024-12-08 21:02:03.117733] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:42.259 [2024-12-08 21:02:03.117745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:42.259 [2024-12-08 21:02:03.117756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:42.259 [2024-12-08 21:02:03.117766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:42.259 [2024-12-08 21:02:03.117786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:42.259 [2024-12-08 21:02:03.117797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:42.259 [2024-12-08 21:02:03.117818] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:42.259 [2024-12-08 21:02:03.117827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117838] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:42.259 [2024-12-08 21:02:03.117848] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:42.259 [2024-12-08 21:02:03.117861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:42.259 [2024-12-08 21:02:03.117901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:42.259 [2024-12-08 21:02:03.117911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:42.259 [2024-12-08 21:02:03.117925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:42.259 [2024-12-08 21:02:03.117935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:42.259 [2024-12-08 21:02:03.117946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:42.259 [2024-12-08 21:02:03.117955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:42.259 [2024-12-08 21:02:03.117966] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:42.260 [2024-12-08 21:02:03.117976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:42.260 [2024-12-08 21:02:03.117988] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:42.260 [2024-12-08 21:02:03.117999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:42.260 [2024-12-08 21:02:03.118011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:42.260 [2024-12-08 21:02:03.118021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:42.260 [2024-12-08 21:02:03.118032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:42.260 [2024-12-08 21:02:03.118041] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:42.260 [2024-12-08 21:02:03.118054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:42.260 [2024-12-08 21:02:03.118064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:42.260 [2024-12-08 21:02:03.118092] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:42.260 [2024-12-08 21:02:03.118108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:42.260 [2024-12-08 21:02:03.118124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:42.260 [2024-12-08 21:02:03.118135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:42.260 [2024-12-08 21:02:03.118147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:42.260 [2024-12-08 21:02:03.118157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:42.260 [2024-12-08 21:02:03.118168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:42.260 [2024-12-08 21:02:03.118178] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:42.260 
[2024-12-08 21:02:03.118190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:42.260 [2024-12-08 21:02:03.118201] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:42.260 [2024-12-08 21:02:03.118212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:42.260 [2024-12-08 21:02:03.118222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:42.260 [2024-12-08 21:02:03.118235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:42.260 [2024-12-08 21:02:03.118246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:42.260 [2024-12-08 21:02:03.118260] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:42.260 [2024-12-08 21:02:03.118270] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:42.260 [2024-12-08 21:02:03.118283] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:42.260 [2024-12-08 21:02:03.118297] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:42.260 [2024-12-08 21:02:03.118311] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:42.260 [2024-12-08 21:02:03.118321] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:42.260 [2024-12-08 21:02:03.118333] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:42.260 [2024-12-08 21:02:03.118345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.260 [2024-12-08 21:02:03.118357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:42.260 [2024-12-08 21:02:03.118367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:15:42.260 [2024-12-08 21:02:03.118379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.260 [2024-12-08 21:02:03.134055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.260 [2024-12-08 21:02:03.134112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:42.260 [2024-12-08 21:02:03.134146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.603 ms 00:15:42.260 [2024-12-08 21:02:03.134158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.260 [2024-12-08 21:02:03.134248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.260 [2024-12-08 21:02:03.134266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:42.260 [2024-12-08 21:02:03.134277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:42.260 [2024-12-08 21:02:03.134288] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.260 [2024-12-08 21:02:03.167953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.260 [2024-12-08 21:02:03.168020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:42.260 [2024-12-08 21:02:03.168037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.598 ms 00:15:42.260 [2024-12-08 21:02:03.168049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.260 [2024-12-08 21:02:03.168132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.260 [2024-12-08 21:02:03.168158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:42.260 [2024-12-08 21:02:03.168170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:42.260 [2024-12-08 21:02:03.168182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.260 [2024-12-08 21:02:03.168589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.260 [2024-12-08 21:02:03.168620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:42.260 [2024-12-08 21:02:03.168635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:15:42.260 [2024-12-08 21:02:03.168648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.260 [2024-12-08 21:02:03.168807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.260 [2024-12-08 21:02:03.168828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:42.260 [2024-12-08 21:02:03.168840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:15:42.260 [2024-12-08 21:02:03.168852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.260 [2024-12-08 21:02:03.192768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.260 [2024-12-08 21:02:03.192812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:42.260 [2024-12-08 21:02:03.192844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.871 ms 00:15:42.260 [2024-12-08 21:02:03.192857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.260 [2024-12-08 21:02:03.204458] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:42.260 [2024-12-08 21:02:03.217065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.260 [2024-12-08 21:02:03.217376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:42.260 [2024-12-08 21:02:03.217422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.076 ms 00:15:42.260 [2024-12-08 21:02:03.217436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.525 [2024-12-08 21:02:03.311509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.525 [2024-12-08 21:02:03.311576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:42.525 [2024-12-08 21:02:03.311614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.989 ms 00:15:42.525 [2024-12-08 21:02:03.311625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.525 [2024-12-08 21:02:03.311689] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
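(The startup sequence above, including the NV-cache scrub that follows, was kicked off by the bdev_ftl_create call traced earlier at ftl/fio.sh@60. Repeated here in isolation for readability — all values are taken verbatim from that trace:)

  # create FTL bdev ftl0: base lvol + NV cache partition, 60 MiB L2P budget in DRAM
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d a28ec911-2773-4f0d-b904-40f9bfda98a7 -c nvc0n1p0 --l2p_dram_limit 60

(The --l2p_dram_limit 60 value is what the l2p cache init notice above reports against: "l2p maximum resident size is: 59 (of 60) MiB".)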
00:15:42.525 [2024-12-08 21:02:03.311709] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:46.838 [2024-12-08 21:02:07.695222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.838 [2024-12-08 21:02:07.695290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:46.838 [2024-12-08 21:02:07.695327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4383.570 ms 00:15:46.838 [2024-12-08 21:02:07.695339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.838 [2024-12-08 21:02:07.695552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.838 [2024-12-08 21:02:07.695569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:46.838 [2024-12-08 21:02:07.695583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:15:46.838 [2024-12-08 21:02:07.695593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.838 [2024-12-08 21:02:07.722990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.838 [2024-12-08 21:02:07.723028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:46.838 [2024-12-08 21:02:07.723062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.328 ms 00:15:46.838 [2024-12-08 21:02:07.723073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.838 [2024-12-08 21:02:07.749548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.838 [2024-12-08 21:02:07.749585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:46.838 [2024-12-08 21:02:07.749621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.410 ms 00:15:46.838 [2024-12-08 21:02:07.749632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.838 [2024-12-08 21:02:07.749955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.838 [2024-12-08 21:02:07.749974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:46.838 [2024-12-08 21:02:07.749987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:15:46.838 [2024-12-08 21:02:07.749998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.838 [2024-12-08 21:02:07.828486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.838 [2024-12-08 21:02:07.828527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:46.838 [2024-12-08 21:02:07.828562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.426 ms 00:15:46.838 [2024-12-08 21:02:07.828573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.838 [2024-12-08 21:02:07.857363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.838 [2024-12-08 21:02:07.857421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:46.838 [2024-12-08 21:02:07.857458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.750 ms 00:15:46.838 [2024-12-08 21:02:07.857470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.838 [2024-12-08 21:02:07.861332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.838 [2024-12-08 21:02:07.861368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:15:46.838 [2024-12-08 21:02:07.861386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.822 ms 00:15:46.838 [2024-12-08 21:02:07.861397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.098 [2024-12-08 21:02:07.892111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.098 [2024-12-08 21:02:07.892176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:47.098 [2024-12-08 21:02:07.892210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.629 ms 00:15:47.098 [2024-12-08 21:02:07.892221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.098 [2024-12-08 21:02:07.892273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.098 [2024-12-08 21:02:07.892290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:47.098 [2024-12-08 21:02:07.892304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:15:47.098 [2024-12-08 21:02:07.892315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.098 [2024-12-08 21:02:07.892502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.098 [2024-12-08 21:02:07.892525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:47.098 [2024-12-08 21:02:07.892544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:15:47.098 [2024-12-08 21:02:07.892555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.098 [2024-12-08 21:02:07.893821] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4787.803 ms, result 0 00:15:47.098 { 00:15:47.098 "name": "ftl0", 00:15:47.098 "uuid": "68a8d7bc-ee17-4c51-86e7-eb7ec969822a" 00:15:47.098 } 00:15:47.098 21:02:07 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:47.098 21:02:07 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:47.098 21:02:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:47.098 21:02:07 -- common/autotest_common.sh@899 -- # local i 00:15:47.098 21:02:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:47.098 21:02:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:47.098 21:02:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:47.357 21:02:08 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:47.357 [ 00:15:47.357 { 00:15:47.357 "name": "ftl0", 00:15:47.357 "aliases": [ 00:15:47.357 "68a8d7bc-ee17-4c51-86e7-eb7ec969822a" 00:15:47.357 ], 00:15:47.357 "product_name": "FTL disk", 00:15:47.357 "block_size": 4096, 00:15:47.357 "num_blocks": 20971520, 00:15:47.357 "uuid": "68a8d7bc-ee17-4c51-86e7-eb7ec969822a", 00:15:47.357 "assigned_rate_limits": { 00:15:47.357 "rw_ios_per_sec": 0, 00:15:47.357 "rw_mbytes_per_sec": 0, 00:15:47.357 "r_mbytes_per_sec": 0, 00:15:47.357 "w_mbytes_per_sec": 0 00:15:47.357 }, 00:15:47.358 "claimed": false, 00:15:47.358 "zoned": false, 00:15:47.358 "supported_io_types": { 00:15:47.358 "read": true, 00:15:47.358 "write": true, 00:15:47.358 "unmap": true, 00:15:47.358 "write_zeroes": true, 00:15:47.358 "flush": true, 00:15:47.358 "reset": false, 00:15:47.358 "compare": false, 00:15:47.358 "compare_and_write": false, 00:15:47.358 "abort": false, 00:15:47.358 "nvme_admin": false, 00:15:47.358 "nvme_io": false 00:15:47.358 }, 
00:15:47.358 "driver_specific": { 00:15:47.358 "ftl": { 00:15:47.358 "base_bdev": "a28ec911-2773-4f0d-b904-40f9bfda98a7", 00:15:47.358 "cache": "nvc0n1p0" 00:15:47.358 } 00:15:47.358 } 00:15:47.358 } 00:15:47.358 ] 00:15:47.358 21:02:08 -- common/autotest_common.sh@905 -- # return 0 00:15:47.358 21:02:08 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:47.358 21:02:08 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:47.617 21:02:08 -- ftl/fio.sh@70 -- # echo ']}' 00:15:47.617 21:02:08 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:47.876 [2024-12-08 21:02:08.830627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.876 [2024-12-08 21:02:08.830850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:47.876 [2024-12-08 21:02:08.830880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:47.876 [2024-12-08 21:02:08.830894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.876 [2024-12-08 21:02:08.830946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:47.876 [2024-12-08 21:02:08.834026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.876 [2024-12-08 21:02:08.834234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:47.876 [2024-12-08 21:02:08.834267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.052 ms 00:15:47.876 [2024-12-08 21:02:08.834279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.876 [2024-12-08 21:02:08.834783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.876 [2024-12-08 21:02:08.834801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:47.876 [2024-12-08 21:02:08.834814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:15:47.876 [2024-12-08 21:02:08.834824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.876 [2024-12-08 21:02:08.837806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.876 [2024-12-08 21:02:08.837978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:47.876 [2024-12-08 21:02:08.838007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.953 ms 00:15:47.876 [2024-12-08 21:02:08.838019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.876 [2024-12-08 21:02:08.843901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.876 [2024-12-08 21:02:08.843929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:47.876 [2024-12-08 21:02:08.843960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.837 ms 00:15:47.876 [2024-12-08 21:02:08.843970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.876 [2024-12-08 21:02:08.870794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.876 [2024-12-08 21:02:08.870831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:47.876 [2024-12-08 21:02:08.870864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.739 ms 00:15:47.876 [2024-12-08 21:02:08.870875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.876 [2024-12-08 21:02:08.891057] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.876 [2024-12-08 21:02:08.891133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:47.876 [2024-12-08 21:02:08.891171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.130 ms 00:15:47.876 [2024-12-08 21:02:08.891184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.876 [2024-12-08 21:02:08.891449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.876 [2024-12-08 21:02:08.891489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:47.876 [2024-12-08 21:02:08.891523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:15:47.876 [2024-12-08 21:02:08.891535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.138 [2024-12-08 21:02:08.921017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.138 [2024-12-08 21:02:08.921255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:48.138 [2024-12-08 21:02:08.921288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.448 ms 00:15:48.138 [2024-12-08 21:02:08.921300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.138 [2024-12-08 21:02:08.948247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.138 [2024-12-08 21:02:08.948457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:48.138 [2024-12-08 21:02:08.948491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.890 ms 00:15:48.138 [2024-12-08 21:02:08.948504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.138 [2024-12-08 21:02:08.976659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.138 [2024-12-08 21:02:08.976727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:48.138 [2024-12-08 21:02:08.976775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.094 ms 00:15:48.138 [2024-12-08 21:02:08.976785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.138 [2024-12-08 21:02:09.002712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.138 [2024-12-08 21:02:09.002748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:48.138 [2024-12-08 21:02:09.002781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.804 ms 00:15:48.138 [2024-12-08 21:02:09.002791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.138 [2024-12-08 21:02:09.002843] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:48.138 [2024-12-08 21:02:09.002863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002922] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.002993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 
21:02:09.003241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:48.138 [2024-12-08 21:02:09.003532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:15:48.139 [2024-12-08 21:02:09.003568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.003988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:48.139 [2024-12-08 21:02:09.004183] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:48.139 [2024-12-08 21:02:09.004196] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68a8d7bc-ee17-4c51-86e7-eb7ec969822a 00:15:48.139 [2024-12-08 21:02:09.004207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:48.139 [2024-12-08 21:02:09.004219] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:48.139 [2024-12-08 21:02:09.004229] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:48.139 [2024-12-08 21:02:09.004258] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:48.139 [2024-12-08 21:02:09.004269] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:48.139 [2024-12-08 21:02:09.004282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:48.139 [2024-12-08 21:02:09.004292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:48.139 [2024-12-08 21:02:09.004303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:48.139 [2024-12-08 21:02:09.004313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:48.139 [2024-12-08 21:02:09.004328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.139 [2024-12-08 21:02:09.004341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:48.139 [2024-12-08 21:02:09.004354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:15:48.139 [2024-12-08 21:02:09.004365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.139 [2024-12-08 21:02:09.018745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.139 [2024-12-08 21:02:09.018779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:48.139 [2024-12-08 21:02:09.018798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.278 ms 00:15:48.139 [2024-12-08 21:02:09.018809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.139 [2024-12-08 21:02:09.019014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.139 [2024-12-08 21:02:09.019032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:48.139 [2024-12-08 21:02:09.019046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:15:48.139 [2024-12-08 21:02:09.019055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.139 [2024-12-08 21:02:09.067218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.139 [2024-12-08 21:02:09.067263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:48.139 [2024-12-08 21:02:09.067296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.139 [2024-12-08 21:02:09.067307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.139 [2024-12-08 21:02:09.067380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.139 [2024-12-08 21:02:09.067394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:48.139 [2024-12-08 21:02:09.067407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.139 [2024-12-08 21:02:09.067417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.139 [2024-12-08 21:02:09.067540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.139 [2024-12-08 21:02:09.067559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:48.139 [2024-12-08 21:02:09.067572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.139 [2024-12-08 21:02:09.067582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.139 [2024-12-08 21:02:09.067615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.139 [2024-12-08 21:02:09.067629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:15:48.139 [2024-12-08 21:02:09.067642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.139 [2024-12-08 21:02:09.067652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.139 [2024-12-08 21:02:09.160553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.139 [2024-12-08 21:02:09.160870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:48.139 [2024-12-08 21:02:09.160902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.139 [2024-12-08 21:02:09.160930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.399 [2024-12-08 21:02:09.194816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.399 [2024-12-08 21:02:09.194850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:48.399 [2024-12-08 21:02:09.194883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.399 [2024-12-08 21:02:09.194894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.399 [2024-12-08 21:02:09.194980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.399 [2024-12-08 21:02:09.194996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:48.399 [2024-12-08 21:02:09.195010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.399 [2024-12-08 21:02:09.195020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.399 [2024-12-08 21:02:09.195129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.399 [2024-12-08 21:02:09.195162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:48.399 [2024-12-08 21:02:09.195179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.399 [2024-12-08 21:02:09.195190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.399 [2024-12-08 21:02:09.195346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.399 [2024-12-08 21:02:09.195365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:48.399 [2024-12-08 21:02:09.195379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.399 [2024-12-08 21:02:09.195390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.399 [2024-12-08 21:02:09.195473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.399 [2024-12-08 21:02:09.195507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:48.399 [2024-12-08 21:02:09.195550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.399 [2024-12-08 21:02:09.195563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.399 [2024-12-08 21:02:09.195618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.399 [2024-12-08 21:02:09.195638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:48.399 [2024-12-08 21:02:09.195653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.399 [2024-12-08 21:02:09.195663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.399 [2024-12-08 21:02:09.195729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.399 [2024-12-08 21:02:09.195745] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:48.399 [2024-12-08 21:02:09.195762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.399 [2024-12-08 21:02:09.195772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.399 [2024-12-08 21:02:09.195973] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 365.288 ms, result 0 00:15:48.399 true 00:15:48.399 21:02:09 -- ftl/fio.sh@75 -- # killprocess 71228 00:15:48.399 21:02:09 -- common/autotest_common.sh@936 -- # '[' -z 71228 ']' 00:15:48.399 21:02:09 -- common/autotest_common.sh@940 -- # kill -0 71228 00:15:48.399 21:02:09 -- common/autotest_common.sh@941 -- # uname 00:15:48.399 21:02:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.399 21:02:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71228 00:15:48.399 killing process with pid 71228 00:15:48.399 21:02:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.399 21:02:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.399 21:02:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71228' 00:15:48.399 21:02:09 -- common/autotest_common.sh@955 -- # kill 71228 00:15:48.399 21:02:09 -- common/autotest_common.sh@960 -- # wait 71228 00:15:52.590 21:02:13 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:52.590 21:02:13 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:52.590 21:02:13 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:52.590 21:02:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.590 21:02:13 -- common/autotest_common.sh@10 -- # set +x 00:15:52.590 21:02:13 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:52.590 21:02:13 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:52.590 21:02:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:52.590 21:02:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:52.590 21:02:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:52.590 21:02:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.590 21:02:13 -- common/autotest_common.sh@1330 -- # shift 00:15:52.590 21:02:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:52.590 21:02:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:52.590 21:02:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.590 21:02:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:52.590 21:02:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:52.590 21:02:13 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:52.591 21:02:13 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:52.591 21:02:13 -- common/autotest_common.sh@1336 -- # break 00:15:52.591 21:02:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:52.591 21:02:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:52.591 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:52.591 fio-3.35 00:15:52.591 Starting 1 thread 00:15:57.859 00:15:57.859 test: (groupid=0, jobs=1): err= 0: pid=71453: Sun Dec 8 21:02:18 2024 00:15:57.859 read: IOPS=955, BW=63.5MiB/s (66.6MB/s)(255MiB/4010msec) 00:15:57.859 slat (nsec): min=5053, max=53225, avg=6759.76, stdev=3025.15 00:15:57.859 clat (usec): min=358, max=1054, avg=467.21, stdev=46.83 00:15:57.859 lat (usec): min=364, max=1060, avg=473.97, stdev=47.63 00:15:57.859 clat percentiles (usec): 00:15:57.859 | 1.00th=[ 383], 5.00th=[ 412], 10.00th=[ 424], 20.00th=[ 437], 00:15:57.859 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 457], 60.00th=[ 465], 00:15:57.859 | 70.00th=[ 482], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 562], 00:15:57.859 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[ 685], 99.95th=[ 840], 00:15:57.859 | 99.99th=[ 1057] 00:15:57.859 write: IOPS=962, BW=63.9MiB/s (67.0MB/s)(256MiB/4005msec); 0 zone resets 00:15:57.859 slat (nsec): min=17336, max=75753, avg=22353.05, stdev=5465.38 00:15:57.859 clat (usec): min=394, max=1263, avg=532.78, stdev=59.72 00:15:57.859 lat (usec): min=414, max=1290, avg=555.13, stdev=60.75 00:15:57.859 clat percentiles (usec): 00:15:57.859 | 1.00th=[ 429], 5.00th=[ 449], 10.00th=[ 469], 20.00th=[ 494], 00:15:57.859 | 30.00th=[ 506], 40.00th=[ 515], 50.00th=[ 523], 60.00th=[ 537], 00:15:57.859 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 603], 95.00th=[ 627], 00:15:57.859 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 930], 99.95th=[ 1057], 00:15:57.859 | 99.99th=[ 1270] 00:15:57.860 bw ( KiB/s): min=64063, max=66239, per=100.00%, avg=65502.75, stdev=750.80, samples=8 00:15:57.860 iops : min= 942, max= 974, avg=963.25, stdev=11.06, samples=8 00:15:57.860 lat (usec) : 500=51.44%, 750=47.83%, 1000=0.69% 00:15:57.860 lat (msec) : 2=0.04% 00:15:57.860 cpu : usr=99.20%, sys=0.15%, ctx=6, majf=0, minf=1318 00:15:57.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.860 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.860 00:15:57.860 Run status group 0 (all jobs): 00:15:57.860 READ: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=255MiB (267MB), run=4010-4010msec 00:15:57.860 WRITE: bw=63.9MiB/s (67.0MB/s), 63.9MiB/s-63.9MiB/s (67.0MB/s-67.0MB/s), io=256MiB (269MB), run=4005-4005msec 00:15:59.234 ----------------------------------------------------- 00:15:59.234 Suppressions used: 00:15:59.234 count bytes template 00:15:59.234 1 5 /usr/src/fio/parse.c 00:15:59.234 1 8 libtcmalloc_minimal.so 00:15:59.234 1 904 libcrypto.so 00:15:59.234 ----------------------------------------------------- 00:15:59.234 00:15:59.234 21:02:20 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:59.234 21:02:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:59.234 21:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:59.234 21:02:20 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:59.234 21:02:20 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:59.234 21:02:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:59.234 21:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:59.234 21:02:20 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:59.234 21:02:20 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:59.234 21:02:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:59.234 21:02:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:59.234 21:02:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:59.234 21:02:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:59.234 21:02:20 -- common/autotest_common.sh@1330 -- # shift 00:15:59.234 21:02:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:59.234 21:02:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:59.234 21:02:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:59.234 21:02:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:59.234 21:02:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:59.234 21:02:20 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:59.234 21:02:20 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:59.234 21:02:20 -- common/autotest_common.sh@1336 -- # break 00:15:59.234 21:02:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:59.234 21:02:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:59.493 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:59.493 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:59.493 fio-3.35 00:15:59.493 Starting 2 threads 00:16:31.651 00:16:31.651 first_half: (groupid=0, jobs=1): err= 0: pid=71557: Sun Dec 8 21:02:49 2024 00:16:31.651 read: IOPS=2311, BW=9244KiB/s (9466kB/s)(256MiB/28331msec) 00:16:31.651 slat (nsec): min=4053, max=50538, avg=7681.66, stdev=3528.42 00:16:31.651 clat (usec): min=993, max=297235, avg=47567.81, stdev=26036.04 00:16:31.651 lat (usec): min=997, max=297261, avg=47575.50, stdev=26036.19 00:16:31.651 clat percentiles (msec): 00:16:31.651 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:16:31.651 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 42], 00:16:31.651 | 70.00th=[ 43], 80.00th=[ 49], 90.00th=[ 51], 95.00th=[ 87], 00:16:31.651 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 232], 99.95th=[ 259], 00:16:31.651 | 99.99th=[ 288] 00:16:31.651 write: IOPS=2317, BW=9270KiB/s (9492kB/s)(256MiB/28280msec); 0 zone resets 00:16:31.651 slat (usec): min=4, max=191, avg= 8.57, stdev= 5.57 00:16:31.651 clat (usec): min=455, max=48136, avg=7770.68, stdev=7965.15 00:16:31.651 lat (usec): min=466, max=48142, avg=7779.25, stdev=7965.37 00:16:31.651 clat percentiles (usec): 00:16:31.651 | 1.00th=[ 1123], 5.00th=[ 1516], 10.00th=[ 1827], 20.00th=[ 3326], 00:16:31.651 | 30.00th=[ 4113], 40.00th=[ 5211], 50.00th=[ 5800], 60.00th=[ 6652], 00:16:31.651 | 70.00th=[ 7242], 80.00th=[ 8717], 90.00th=[14484], 95.00th=[23462], 00:16:31.651 | 99.00th=[43779], 99.50th=[44827], 99.90th=[46400], 99.95th=[46924], 00:16:31.651 | 99.99th=[47449] 00:16:31.651 bw ( KiB/s): min= 184, max=46688, per=100.00%, 
avg=21694.17, stdev=14840.82, samples=24 00:16:31.651 iops : min= 46, max=11672, avg=5423.54, stdev=3710.21, samples=24 00:16:31.651 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.22% 00:16:31.651 lat (msec) : 2=5.69%, 4=8.36%, 10=27.19%, 20=7.22%, 50=44.03% 00:16:31.651 lat (msec) : 100=5.02%, 250=2.17%, 500=0.03% 00:16:31.651 cpu : usr=98.85%, sys=0.51%, ctx=56, majf=0, minf=5552 00:16:31.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:31.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.651 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.651 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.651 second_half: (groupid=0, jobs=1): err= 0: pid=71558: Sun Dec 8 21:02:49 2024 00:16:31.651 read: IOPS=2333, BW=9334KiB/s (9558kB/s)(256MiB/28066msec) 00:16:31.651 slat (nsec): min=4077, max=45796, avg=7625.33, stdev=3457.15 00:16:31.651 clat (msec): min=10, max=227, avg=47.92, stdev=22.90 00:16:31.651 lat (msec): min=10, max=227, avg=47.93, stdev=22.90 00:16:31.651 clat percentiles (msec): 00:16:31.651 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:16:31.651 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 42], 00:16:31.651 | 70.00th=[ 44], 80.00th=[ 49], 90.00th=[ 51], 95.00th=[ 81], 00:16:31.651 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 201], 99.95th=[ 205], 00:16:31.651 | 99.99th=[ 222] 00:16:31.651 write: IOPS=2348, BW=9394KiB/s (9620kB/s)(256MiB/27905msec); 0 zone resets 00:16:31.651 slat (usec): min=4, max=696, avg= 8.32, stdev= 6.04 00:16:31.651 clat (usec): min=516, max=44520, avg=6900.64, stdev=4364.40 00:16:31.651 lat (usec): min=529, max=44527, avg=6908.96, stdev=4364.66 00:16:31.651 clat percentiles (usec): 00:16:31.651 | 1.00th=[ 1287], 5.00th=[ 2073], 10.00th=[ 2966], 20.00th=[ 3818], 00:16:31.651 | 30.00th=[ 4817], 40.00th=[ 5473], 50.00th=[ 5997], 60.00th=[ 6783], 00:16:31.651 | 70.00th=[ 7177], 80.00th=[ 8356], 90.00th=[13042], 95.00th=[14877], 00:16:31.651 | 99.00th=[22938], 99.50th=[32113], 99.90th=[39584], 99.95th=[42730], 00:16:31.651 | 99.99th=[44303] 00:16:31.651 bw ( KiB/s): min= 2016, max=47688, per=100.00%, avg=22795.13, stdev=13958.37, samples=23 00:16:31.651 iops : min= 504, max=11922, avg=5698.78, stdev=3489.59, samples=23 00:16:31.651 lat (usec) : 750=0.04%, 1000=0.19% 00:16:31.651 lat (msec) : 2=2.05%, 4=9.20%, 10=30.51%, 20=7.43%, 50=43.16% 00:16:31.651 lat (msec) : 100=5.40%, 250=2.04% 00:16:31.651 cpu : usr=98.87%, sys=0.51%, ctx=52, majf=0, minf=5563 00:16:31.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:31.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.651 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.651 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.651 00:16:31.651 Run status group 0 (all jobs): 00:16:31.651 READ: bw=18.1MiB/s (18.9MB/s), 9244KiB/s-9334KiB/s (9466kB/s-9558kB/s), io=512MiB (536MB), run=28066-28331msec 00:16:31.651 WRITE: bw=18.1MiB/s (19.0MB/s), 9270KiB/s-9394KiB/s (9492kB/s-9620kB/s), io=512MiB (537MB), run=27905-28280msec 00:16:31.651 ----------------------------------------------------- 00:16:31.651 Suppressions used: 00:16:31.651 count bytes template 00:16:31.651 2 10 /usr/src/fio/parse.c 00:16:31.651 2 192 
/usr/src/fio/iolog.c 00:16:31.651 1 8 libtcmalloc_minimal.so 00:16:31.651 1 904 libcrypto.so 00:16:31.651 ----------------------------------------------------- 00:16:31.651 00:16:31.651 21:02:51 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:31.651 21:02:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.651 21:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:31.651 21:02:51 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:31.651 21:02:51 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:31.651 21:02:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:31.651 21:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:31.651 21:02:51 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:31.651 21:02:51 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:31.651 21:02:51 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:31.651 21:02:51 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:31.651 21:02:51 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:31.651 21:02:51 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.652 21:02:51 -- common/autotest_common.sh@1330 -- # shift 00:16:31.652 21:02:51 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:31.652 21:02:51 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:31.652 21:02:51 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.652 21:02:51 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:31.652 21:02:51 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:31.652 21:02:51 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:31.652 21:02:51 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:31.652 21:02:51 -- common/autotest_common.sh@1336 -- # break 00:16:31.652 21:02:51 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:31.652 21:02:51 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:31.652 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:31.652 fio-3.35 00:16:31.652 Starting 1 thread 00:16:49.749 00:16:49.749 test: (groupid=0, jobs=1): err= 0: pid=71914: Sun Dec 8 21:03:09 2024 00:16:49.749 read: IOPS=5774, BW=22.6MiB/s (23.7MB/s)(255MiB/11292msec) 00:16:49.749 slat (nsec): min=4165, max=52770, avg=7067.12, stdev=3635.39 00:16:49.749 clat (usec): min=863, max=42988, avg=22156.38, stdev=1025.55 00:16:49.749 lat (usec): min=867, max=42994, avg=22163.45, stdev=1025.57 00:16:49.749 clat percentiles (usec): 00:16:49.749 | 1.00th=[20841], 5.00th=[21365], 10.00th=[21365], 20.00th=[21627], 00:16:49.749 | 30.00th=[21890], 40.00th=[21890], 50.00th=[22152], 60.00th=[22152], 00:16:49.749 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23200], 00:16:49.749 | 99.00th=[25822], 99.50th=[26346], 99.90th=[32113], 99.95th=[38011], 00:16:49.749 | 99.99th=[41681] 00:16:49.749 write: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(256MiB/5546msec); 0 zone resets 00:16:49.749 slat (usec): min=4, max=345, avg= 
9.69, stdev= 6.48 00:16:49.749 clat (usec): min=633, max=66811, avg=10767.50, stdev=13837.90 00:16:49.749 lat (usec): min=642, max=66819, avg=10777.19, stdev=13837.99 00:16:49.749 clat percentiles (usec): 00:16:49.749 | 1.00th=[ 988], 5.00th=[ 1205], 10.00th=[ 1319], 20.00th=[ 1500], 00:16:49.749 | 30.00th=[ 1713], 40.00th=[ 2147], 50.00th=[ 6915], 60.00th=[ 7898], 00:16:49.749 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[40633], 95.00th=[42730], 00:16:49.749 | 99.00th=[45876], 99.50th=[47449], 99.90th=[50594], 99.95th=[54789], 00:16:49.749 | 99.99th=[64226] 00:16:49.749 bw ( KiB/s): min= 2224, max=66392, per=92.43%, avg=43690.67, stdev=16249.89, samples=12 00:16:49.749 iops : min= 556, max=16598, avg=10922.67, stdev=4062.47, samples=12 00:16:49.749 lat (usec) : 750=0.01%, 1000=0.57% 00:16:49.749 lat (msec) : 2=18.68%, 4=1.67%, 10=18.72%, 20=2.56%, 50=57.73% 00:16:49.749 lat (msec) : 100=0.06% 00:16:49.749 cpu : usr=98.03%, sys=1.22%, ctx=35, majf=0, minf=5567 00:16:49.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:49.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.749 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:49.749 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:49.749 00:16:49.749 Run status group 0 (all jobs): 00:16:49.749 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=255MiB (267MB), run=11292-11292msec 00:16:49.749 WRITE: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=256MiB (268MB), run=5546-5546msec 00:16:50.319 ----------------------------------------------------- 00:16:50.319 Suppressions used: 00:16:50.319 count bytes template 00:16:50.319 1 5 /usr/src/fio/parse.c 00:16:50.319 2 192 /usr/src/fio/iolog.c 00:16:50.319 1 8 libtcmalloc_minimal.so 00:16:50.319 1 904 libcrypto.so 00:16:50.319 ----------------------------------------------------- 00:16:50.319 00:16:50.319 21:03:11 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:50.319 21:03:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:50.319 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:16:50.319 21:03:11 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:50.319 Remove shared memory files 00:16:50.319 21:03:11 -- ftl/fio.sh@85 -- # remove_shm 00:16:50.319 21:03:11 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:50.319 21:03:11 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:50.578 21:03:11 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:50.578 21:03:11 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56478 /dev/shm/spdk_tgt_trace.pid70145 00:16:50.578 21:03:11 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:50.578 21:03:11 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:50.578 ************************************ 00:16:50.578 END TEST ftl_fio_basic 00:16:50.578 ************************************ 00:16:50.578 00:16:50.578 real 1m12.538s 00:16:50.578 user 2m42.253s 00:16:50.578 sys 0m3.778s 00:16:50.578 21:03:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:50.578 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:16:50.578 21:03:11 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:16:50.578 21:03:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:50.578 21:03:11 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.578 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:16:50.578 ************************************ 00:16:50.578 START TEST ftl_bdevperf 00:16:50.578 ************************************ 00:16:50.578 21:03:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:16:50.578 * Looking for test storage... 00:16:50.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.578 21:03:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:50.578 21:03:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:50.578 21:03:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:50.578 21:03:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:50.578 21:03:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:50.578 21:03:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:50.578 21:03:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:50.578 21:03:11 -- scripts/common.sh@335 -- # IFS=.-: 00:16:50.578 21:03:11 -- scripts/common.sh@335 -- # read -ra ver1 00:16:50.578 21:03:11 -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.578 21:03:11 -- scripts/common.sh@336 -- # read -ra ver2 00:16:50.578 21:03:11 -- scripts/common.sh@337 -- # local 'op=<' 00:16:50.578 21:03:11 -- scripts/common.sh@339 -- # ver1_l=2 00:16:50.578 21:03:11 -- scripts/common.sh@340 -- # ver2_l=1 00:16:50.578 21:03:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:50.578 21:03:11 -- scripts/common.sh@343 -- # case "$op" in 00:16:50.578 21:03:11 -- scripts/common.sh@344 -- # : 1 00:16:50.578 21:03:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:50.578 21:03:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.578 21:03:11 -- scripts/common.sh@364 -- # decimal 1 00:16:50.578 21:03:11 -- scripts/common.sh@352 -- # local d=1 00:16:50.578 21:03:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.578 21:03:11 -- scripts/common.sh@354 -- # echo 1 00:16:50.578 21:03:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:50.578 21:03:11 -- scripts/common.sh@365 -- # decimal 2 00:16:50.578 21:03:11 -- scripts/common.sh@352 -- # local d=2 00:16:50.578 21:03:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.578 21:03:11 -- scripts/common.sh@354 -- # echo 2 00:16:50.578 21:03:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:50.578 21:03:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:50.578 21:03:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:50.578 21:03:11 -- scripts/common.sh@367 -- # return 0 00:16:50.839 21:03:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.839 21:03:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:50.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.839 --rc genhtml_branch_coverage=1 00:16:50.839 --rc genhtml_function_coverage=1 00:16:50.839 --rc genhtml_legend=1 00:16:50.839 --rc geninfo_all_blocks=1 00:16:50.839 --rc geninfo_unexecuted_blocks=1 00:16:50.839 00:16:50.839 ' 00:16:50.839 21:03:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:50.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.839 --rc genhtml_branch_coverage=1 00:16:50.839 --rc genhtml_function_coverage=1 00:16:50.839 --rc genhtml_legend=1 00:16:50.839 --rc geninfo_all_blocks=1 00:16:50.839 --rc geninfo_unexecuted_blocks=1 00:16:50.839 00:16:50.839 ' 00:16:50.839 21:03:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:50.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.839 --rc genhtml_branch_coverage=1 00:16:50.839 --rc genhtml_function_coverage=1 00:16:50.839 --rc genhtml_legend=1 00:16:50.839 --rc geninfo_all_blocks=1 00:16:50.839 --rc geninfo_unexecuted_blocks=1 00:16:50.839 00:16:50.839 ' 00:16:50.839 21:03:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:50.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.839 --rc genhtml_branch_coverage=1 00:16:50.839 --rc genhtml_function_coverage=1 00:16:50.839 --rc genhtml_legend=1 00:16:50.839 --rc geninfo_all_blocks=1 00:16:50.839 --rc geninfo_unexecuted_blocks=1 00:16:50.839 00:16:50.839 ' 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:50.839 21:03:11 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:50.839 21:03:11 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.839 21:03:11 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.839 21:03:11 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:50.839 21:03:11 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:50.839 21:03:11 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.839 21:03:11 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:50.839 21:03:11 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:50.839 21:03:11 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.839 21:03:11 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.839 21:03:11 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:50.839 21:03:11 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:50.839 21:03:11 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.839 21:03:11 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.839 21:03:11 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:50.839 21:03:11 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:50.839 21:03:11 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.839 21:03:11 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.839 21:03:11 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:50.839 21:03:11 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:50.839 21:03:11 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.839 21:03:11 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.839 21:03:11 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.839 21:03:11 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.839 21:03:11 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:50.839 21:03:11 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:50.839 21:03:11 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.839 21:03:11 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@13 -- # use_append= 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:50.839 21:03:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.839 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=72189 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:50.839 21:03:11 -- ftl/bdevperf.sh@22 -- # waitforlisten 72189 00:16:50.839 21:03:11 -- common/autotest_common.sh@829 -- # '[' -z 72189 ']' 00:16:50.839 21:03:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
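The waitforlisten step above simply polls until the freshly launched bdevperf app (pid 72189) answers on /var/tmp/spdk.sock. A minimal standalone sketch of that pattern, assuming a local SPDK checkout; the probe RPC and the 0.5 s retry interval are illustrative choices here, only the 100-try budget comes from max_retries above:

for ((i = 0; i < 100; i++)); do
    # rpc_get_methods is a cheap query; a successful reply means the RPC socket is live
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done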
00:16:50.839 21:03:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.839 21:03:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.839 21:03:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.839 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:16:50.839 [2024-12-08 21:03:11.725482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:50.839 [2024-12-08 21:03:11.725598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72189 ] 00:16:51.099 [2024-12-08 21:03:11.885539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.099 [2024-12-08 21:03:12.110468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.668 21:03:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.668 21:03:12 -- common/autotest_common.sh@862 -- # return 0 00:16:51.668 21:03:12 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:51.668 21:03:12 -- ftl/common.sh@54 -- # local name=nvme0 00:16:51.668 21:03:12 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:51.668 21:03:12 -- ftl/common.sh@56 -- # local size=103424 00:16:51.668 21:03:12 -- ftl/common.sh@59 -- # local base_bdev 00:16:51.928 21:03:12 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:52.188 21:03:13 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:52.188 21:03:13 -- ftl/common.sh@62 -- # local base_size 00:16:52.188 21:03:13 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:52.188 21:03:13 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:52.188 21:03:13 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:52.188 21:03:13 -- common/autotest_common.sh@1369 -- # local bs 00:16:52.188 21:03:13 -- common/autotest_common.sh@1370 -- # local nb 00:16:52.188 21:03:13 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:52.447 21:03:13 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:52.447 { 00:16:52.447 "name": "nvme0n1", 00:16:52.447 "aliases": [ 00:16:52.447 "0590c47d-b4c6-4a17-8bbf-686424140a30" 00:16:52.447 ], 00:16:52.447 "product_name": "NVMe disk", 00:16:52.447 "block_size": 4096, 00:16:52.447 "num_blocks": 1310720, 00:16:52.447 "uuid": "0590c47d-b4c6-4a17-8bbf-686424140a30", 00:16:52.447 "assigned_rate_limits": { 00:16:52.447 "rw_ios_per_sec": 0, 00:16:52.447 "rw_mbytes_per_sec": 0, 00:16:52.447 "r_mbytes_per_sec": 0, 00:16:52.447 "w_mbytes_per_sec": 0 00:16:52.447 }, 00:16:52.447 "claimed": true, 00:16:52.447 "claim_type": "read_many_write_one", 00:16:52.447 "zoned": false, 00:16:52.447 "supported_io_types": { 00:16:52.447 "read": true, 00:16:52.447 "write": true, 00:16:52.447 "unmap": true, 00:16:52.447 "write_zeroes": true, 00:16:52.447 "flush": true, 00:16:52.447 "reset": true, 00:16:52.447 "compare": true, 00:16:52.447 "compare_and_write": false, 00:16:52.447 "abort": true, 00:16:52.447 "nvme_admin": true, 00:16:52.447 "nvme_io": true 00:16:52.447 }, 00:16:52.447 "driver_specific": { 00:16:52.447 "nvme": [ 00:16:52.447 { 00:16:52.447 "pci_address": "0000:00:07.0", 00:16:52.447 "trid": { 00:16:52.447 "trtype": "PCIe", 00:16:52.447 "traddr": 
"0000:00:07.0" 00:16:52.447 }, 00:16:52.447 "ctrlr_data": { 00:16:52.448 "cntlid": 0, 00:16:52.448 "vendor_id": "0x1b36", 00:16:52.448 "model_number": "QEMU NVMe Ctrl", 00:16:52.448 "serial_number": "12341", 00:16:52.448 "firmware_revision": "8.0.0", 00:16:52.448 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:52.448 "oacs": { 00:16:52.448 "security": 0, 00:16:52.448 "format": 1, 00:16:52.448 "firmware": 0, 00:16:52.448 "ns_manage": 1 00:16:52.448 }, 00:16:52.448 "multi_ctrlr": false, 00:16:52.448 "ana_reporting": false 00:16:52.448 }, 00:16:52.448 "vs": { 00:16:52.448 "nvme_version": "1.4" 00:16:52.448 }, 00:16:52.448 "ns_data": { 00:16:52.448 "id": 1, 00:16:52.448 "can_share": false 00:16:52.448 } 00:16:52.448 } 00:16:52.448 ], 00:16:52.448 "mp_policy": "active_passive" 00:16:52.448 } 00:16:52.448 } 00:16:52.448 ]' 00:16:52.448 21:03:13 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:52.448 21:03:13 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:52.448 21:03:13 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:52.448 21:03:13 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:52.448 21:03:13 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:52.448 21:03:13 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:52.448 21:03:13 -- ftl/common.sh@63 -- # base_size=5120 00:16:52.448 21:03:13 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:52.448 21:03:13 -- ftl/common.sh@67 -- # clear_lvols 00:16:52.448 21:03:13 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:52.448 21:03:13 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:52.707 21:03:13 -- ftl/common.sh@28 -- # stores=5bf04795-1249-4f91-9038-76af846280d7 00:16:52.707 21:03:13 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:52.707 21:03:13 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5bf04795-1249-4f91-9038-76af846280d7 00:16:52.967 21:03:13 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:53.226 21:03:14 -- ftl/common.sh@68 -- # lvs=2718ba02-4ef6-4392-b5e2-ce81a31cf9a2 00:16:53.226 21:03:14 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2718ba02-4ef6-4392-b5e2-ce81a31cf9a2 00:16:53.485 21:03:14 -- ftl/bdevperf.sh@23 -- # split_bdev=c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:53.485 21:03:14 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:53.485 21:03:14 -- ftl/common.sh@35 -- # local name=nvc0 00:16:53.485 21:03:14 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:53.485 21:03:14 -- ftl/common.sh@37 -- # local base_bdev=c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:53.485 21:03:14 -- ftl/common.sh@38 -- # local cache_size= 00:16:53.486 21:03:14 -- ftl/common.sh@41 -- # get_bdev_size c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:53.486 21:03:14 -- common/autotest_common.sh@1367 -- # local bdev_name=c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:53.486 21:03:14 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:53.486 21:03:14 -- common/autotest_common.sh@1369 -- # local bs 00:16:53.486 21:03:14 -- common/autotest_common.sh@1370 -- # local nb 00:16:53.486 21:03:14 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:53.744 21:03:14 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:53.744 { 
00:16:53.744 "name": "c050d028-4c1f-4ec6-9e6f-30ecc4203cc8", 00:16:53.744 "aliases": [ 00:16:53.744 "lvs/nvme0n1p0" 00:16:53.744 ], 00:16:53.744 "product_name": "Logical Volume", 00:16:53.744 "block_size": 4096, 00:16:53.744 "num_blocks": 26476544, 00:16:53.744 "uuid": "c050d028-4c1f-4ec6-9e6f-30ecc4203cc8", 00:16:53.744 "assigned_rate_limits": { 00:16:53.744 "rw_ios_per_sec": 0, 00:16:53.744 "rw_mbytes_per_sec": 0, 00:16:53.745 "r_mbytes_per_sec": 0, 00:16:53.745 "w_mbytes_per_sec": 0 00:16:53.745 }, 00:16:53.745 "claimed": false, 00:16:53.745 "zoned": false, 00:16:53.745 "supported_io_types": { 00:16:53.745 "read": true, 00:16:53.745 "write": true, 00:16:53.745 "unmap": true, 00:16:53.745 "write_zeroes": true, 00:16:53.745 "flush": false, 00:16:53.745 "reset": true, 00:16:53.745 "compare": false, 00:16:53.745 "compare_and_write": false, 00:16:53.745 "abort": false, 00:16:53.745 "nvme_admin": false, 00:16:53.745 "nvme_io": false 00:16:53.745 }, 00:16:53.745 "driver_specific": { 00:16:53.745 "lvol": { 00:16:53.745 "lvol_store_uuid": "2718ba02-4ef6-4392-b5e2-ce81a31cf9a2", 00:16:53.745 "base_bdev": "nvme0n1", 00:16:53.745 "thin_provision": true, 00:16:53.745 "snapshot": false, 00:16:53.745 "clone": false, 00:16:53.745 "esnap_clone": false 00:16:53.745 } 00:16:53.745 } 00:16:53.745 } 00:16:53.745 ]' 00:16:53.745 21:03:14 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:53.745 21:03:14 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:53.745 21:03:14 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:53.745 21:03:14 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:53.745 21:03:14 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:53.745 21:03:14 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:53.745 21:03:14 -- ftl/common.sh@41 -- # local base_size=5171 00:16:53.745 21:03:14 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:53.745 21:03:14 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:54.004 21:03:14 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:54.004 21:03:14 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:54.004 21:03:14 -- ftl/common.sh@48 -- # get_bdev_size c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:54.004 21:03:14 -- common/autotest_common.sh@1367 -- # local bdev_name=c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:54.004 21:03:14 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:54.004 21:03:14 -- common/autotest_common.sh@1369 -- # local bs 00:16:54.004 21:03:14 -- common/autotest_common.sh@1370 -- # local nb 00:16:54.004 21:03:14 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:54.265 21:03:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:54.265 { 00:16:54.265 "name": "c050d028-4c1f-4ec6-9e6f-30ecc4203cc8", 00:16:54.265 "aliases": [ 00:16:54.265 "lvs/nvme0n1p0" 00:16:54.265 ], 00:16:54.265 "product_name": "Logical Volume", 00:16:54.265 "block_size": 4096, 00:16:54.265 "num_blocks": 26476544, 00:16:54.265 "uuid": "c050d028-4c1f-4ec6-9e6f-30ecc4203cc8", 00:16:54.265 "assigned_rate_limits": { 00:16:54.265 "rw_ios_per_sec": 0, 00:16:54.265 "rw_mbytes_per_sec": 0, 00:16:54.265 "r_mbytes_per_sec": 0, 00:16:54.265 "w_mbytes_per_sec": 0 00:16:54.265 }, 00:16:54.265 "claimed": false, 00:16:54.265 "zoned": false, 00:16:54.265 "supported_io_types": { 00:16:54.265 "read": true, 00:16:54.265 "write": true, 00:16:54.265 "unmap": true, 
00:16:54.265 "write_zeroes": true, 00:16:54.265 "flush": false, 00:16:54.265 "reset": true, 00:16:54.265 "compare": false, 00:16:54.265 "compare_and_write": false, 00:16:54.265 "abort": false, 00:16:54.265 "nvme_admin": false, 00:16:54.265 "nvme_io": false 00:16:54.265 }, 00:16:54.265 "driver_specific": { 00:16:54.265 "lvol": { 00:16:54.265 "lvol_store_uuid": "2718ba02-4ef6-4392-b5e2-ce81a31cf9a2", 00:16:54.265 "base_bdev": "nvme0n1", 00:16:54.265 "thin_provision": true, 00:16:54.265 "snapshot": false, 00:16:54.265 "clone": false, 00:16:54.265 "esnap_clone": false 00:16:54.265 } 00:16:54.265 } 00:16:54.265 } 00:16:54.265 ]' 00:16:54.265 21:03:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:54.265 21:03:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:54.265 21:03:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:54.265 21:03:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:54.265 21:03:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:54.265 21:03:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:54.265 21:03:15 -- ftl/common.sh@48 -- # cache_size=5171 00:16:54.265 21:03:15 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:54.523 21:03:15 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:54.523 21:03:15 -- ftl/bdevperf.sh@26 -- # get_bdev_size c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:54.523 21:03:15 -- common/autotest_common.sh@1367 -- # local bdev_name=c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:54.523 21:03:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:54.523 21:03:15 -- common/autotest_common.sh@1369 -- # local bs 00:16:54.523 21:03:15 -- common/autotest_common.sh@1370 -- # local nb 00:16:54.523 21:03:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 00:16:54.781 21:03:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:54.781 { 00:16:54.781 "name": "c050d028-4c1f-4ec6-9e6f-30ecc4203cc8", 00:16:54.781 "aliases": [ 00:16:54.781 "lvs/nvme0n1p0" 00:16:54.781 ], 00:16:54.781 "product_name": "Logical Volume", 00:16:54.781 "block_size": 4096, 00:16:54.781 "num_blocks": 26476544, 00:16:54.781 "uuid": "c050d028-4c1f-4ec6-9e6f-30ecc4203cc8", 00:16:54.781 "assigned_rate_limits": { 00:16:54.781 "rw_ios_per_sec": 0, 00:16:54.781 "rw_mbytes_per_sec": 0, 00:16:54.781 "r_mbytes_per_sec": 0, 00:16:54.781 "w_mbytes_per_sec": 0 00:16:54.781 }, 00:16:54.781 "claimed": false, 00:16:54.781 "zoned": false, 00:16:54.781 "supported_io_types": { 00:16:54.781 "read": true, 00:16:54.781 "write": true, 00:16:54.781 "unmap": true, 00:16:54.781 "write_zeroes": true, 00:16:54.781 "flush": false, 00:16:54.781 "reset": true, 00:16:54.781 "compare": false, 00:16:54.781 "compare_and_write": false, 00:16:54.781 "abort": false, 00:16:54.781 "nvme_admin": false, 00:16:54.781 "nvme_io": false 00:16:54.781 }, 00:16:54.781 "driver_specific": { 00:16:54.781 "lvol": { 00:16:54.781 "lvol_store_uuid": "2718ba02-4ef6-4392-b5e2-ce81a31cf9a2", 00:16:54.781 "base_bdev": "nvme0n1", 00:16:54.781 "thin_provision": true, 00:16:54.781 "snapshot": false, 00:16:54.781 "clone": false, 00:16:54.781 "esnap_clone": false 00:16:54.781 } 00:16:54.781 } 00:16:54.781 } 00:16:54.781 ]' 00:16:54.781 21:03:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:54.781 21:03:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:54.781 21:03:15 -- common/autotest_common.sh@1373 -- 
# jq '.[] .num_blocks' 00:16:55.042 21:03:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:55.042 21:03:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:55.042 21:03:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:55.042 21:03:15 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:55.042 21:03:15 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c050d028-4c1f-4ec6-9e6f-30ecc4203cc8 -c nvc0n1p0 --l2p_dram_limit 20 00:16:55.042 [2024-12-08 21:03:16.023254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.042 [2024-12-08 21:03:16.023304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:55.042 [2024-12-08 21:03:16.023340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:55.042 [2024-12-08 21:03:16.023351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.042 [2024-12-08 21:03:16.023425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.042 [2024-12-08 21:03:16.023441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.042 [2024-12-08 21:03:16.023454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:55.042 [2024-12-08 21:03:16.023479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.042 [2024-12-08 21:03:16.023504] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:55.042 [2024-12-08 21:03:16.024535] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:55.042 [2024-12-08 21:03:16.024579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.043 [2024-12-08 21:03:16.024592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.043 [2024-12-08 21:03:16.024605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:16:55.043 [2024-12-08 21:03:16.024630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.043 [2024-12-08 21:03:16.024783] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dd76519a-0f31-42b3-a95c-63398f2b1247 00:16:55.043 [2024-12-08 21:03:16.025767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.043 [2024-12-08 21:03:16.025804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:55.043 [2024-12-08 21:03:16.025819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:55.043 [2024-12-08 21:03:16.025831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.043 [2024-12-08 21:03:16.029842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.043 [2024-12-08 21:03:16.029887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.043 [2024-12-08 21:03:16.029917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.969 ms 00:16:55.043 [2024-12-08 21:03:16.029928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.043 [2024-12-08 21:03:16.030027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.043 [2024-12-08 21:03:16.030048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.043 [2024-12-08 21:03:16.030060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.070 ms 00:16:55.043 [2024-12-08 21:03:16.030076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.043 [2024-12-08 21:03:16.030175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.043 [2024-12-08 21:03:16.030195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:55.043 [2024-12-08 21:03:16.030209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:55.043 [2024-12-08 21:03:16.030237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.043 [2024-12-08 21:03:16.030268] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:55.043 [2024-12-08 21:03:16.034225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.043 [2024-12-08 21:03:16.034272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.043 [2024-12-08 21:03:16.034304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.964 ms 00:16:55.043 [2024-12-08 21:03:16.034315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.043 [2024-12-08 21:03:16.034355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.043 [2024-12-08 21:03:16.034369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:55.043 [2024-12-08 21:03:16.034382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:55.043 [2024-12-08 21:03:16.034392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.043 [2024-12-08 21:03:16.034442] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:55.043 [2024-12-08 21:03:16.034586] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:55.043 [2024-12-08 21:03:16.034610] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:55.043 [2024-12-08 21:03:16.034624] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:55.043 [2024-12-08 21:03:16.034639] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:55.043 [2024-12-08 21:03:16.034651] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:55.043 [2024-12-08 21:03:16.034663] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:55.043 [2024-12-08 21:03:16.034673] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:55.043 [2024-12-08 21:03:16.034690] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:55.043 [2024-12-08 21:03:16.034699] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:55.043 [2024-12-08 21:03:16.034712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.043 [2024-12-08 21:03:16.034722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:55.043 [2024-12-08 21:03:16.034735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:55.043 [2024-12-08 21:03:16.034745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.043 [2024-12-08 21:03:16.034811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:55.043 [2024-12-08 21:03:16.034825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:55.043 [2024-12-08 21:03:16.034838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:55.043 [2024-12-08 21:03:16.034848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.043 [2024-12-08 21:03:16.034933] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:55.043 [2024-12-08 21:03:16.034949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:55.043 [2024-12-08 21:03:16.034962] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.043 [2024-12-08 21:03:16.034982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.043 [2024-12-08 21:03:16.034995] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:55.043 [2024-12-08 21:03:16.035005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:55.043 [2024-12-08 21:03:16.035026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:55.043 [2024-12-08 21:03:16.035037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.043 [2024-12-08 21:03:16.035061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:55.043 [2024-12-08 21:03:16.035071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:55.043 [2024-12-08 21:03:16.035098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.043 [2024-12-08 21:03:16.035108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:55.043 [2024-12-08 21:03:16.035134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:55.043 [2024-12-08 21:03:16.035147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:55.043 [2024-12-08 21:03:16.035171] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:55.043 [2024-12-08 21:03:16.035183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:55.043 [2024-12-08 21:03:16.035204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:55.043 [2024-12-08 21:03:16.035214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:55.043 [2024-12-08 21:03:16.035225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:55.043 [2024-12-08 21:03:16.035235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:55.043 [2024-12-08 21:03:16.035255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:55.043 [2024-12-08 21:03:16.035267] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:55.043 [2024-12-08 21:03:16.035459] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l2 00:16:55.043 [2024-12-08 21:03:16.035480] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:55.043 [2024-12-08 21:03:16.035503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:55.043 [2024-12-08 21:03:16.035516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:55.043 [2024-12-08 21:03:16.035536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:55.043 [2024-12-08 21:03:16.035546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.043 [2024-12-08 21:03:16.035568] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:55.043 [2024-12-08 21:03:16.035579] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:55.043 [2024-12-08 21:03:16.035588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.043 [2024-12-08 21:03:16.035599] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:55.043 [2024-12-08 21:03:16.035610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:55.043 [2024-12-08 21:03:16.035621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.043 [2024-12-08 21:03:16.035631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.043 [2024-12-08 21:03:16.035644] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:55.043 [2024-12-08 21:03:16.035653] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:55.043 [2024-12-08 21:03:16.035665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:55.043 [2024-12-08 21:03:16.035674] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:55.043 [2024-12-08 21:03:16.035687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:55.043 [2024-12-08 21:03:16.035696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:55.043 [2024-12-08 21:03:16.035709] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:55.043 [2024-12-08 21:03:16.035722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.043 [2024-12-08 21:03:16.035738] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:55.043 [2024-12-08 21:03:16.035748] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:55.043 [2024-12-08 21:03:16.035760] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:55.044 [2024-12-08 21:03:16.035770] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:55.044 [2024-12-08 21:03:16.035781] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 
blk_offs:0x5520 blk_sz:0x400 00:16:55.044 [2024-12-08 21:03:16.035791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:55.044 [2024-12-08 21:03:16.035803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:55.044 [2024-12-08 21:03:16.035813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:55.044 [2024-12-08 21:03:16.035825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:55.044 [2024-12-08 21:03:16.035835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:55.044 [2024-12-08 21:03:16.035848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:55.044 [2024-12-08 21:03:16.035859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:55.044 [2024-12-08 21:03:16.035872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:55.044 [2024-12-08 21:03:16.035882] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:55.044 [2024-12-08 21:03:16.035895] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.044 [2024-12-08 21:03:16.035907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:55.044 [2024-12-08 21:03:16.035919] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:55.044 [2024-12-08 21:03:16.035929] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:55.044 [2024-12-08 21:03:16.035940] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:55.044 [2024-12-08 21:03:16.035951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.044 [2024-12-08 21:03:16.035974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:55.044 [2024-12-08 21:03:16.035985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:16:55.044 [2024-12-08 21:03:16.035996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.044 [2024-12-08 21:03:16.051271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.044 [2024-12-08 21:03:16.051331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.044 [2024-12-08 21:03:16.051348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.232 ms 00:16:55.044 [2024-12-08 21:03:16.051360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.044 [2024-12-08 21:03:16.051442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.044 [2024-12-08 21:03:16.051461] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:55.044 [2024-12-08 21:03:16.051472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:55.044 [2024-12-08 21:03:16.051483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.322 [2024-12-08 21:03:16.096565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.322 [2024-12-08 21:03:16.096623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:55.322 [2024-12-08 21:03:16.096657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.995 ms 00:16:55.322 [2024-12-08 21:03:16.096671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.322 [2024-12-08 21:03:16.096719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.322 [2024-12-08 21:03:16.096741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:55.322 [2024-12-08 21:03:16.096754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:55.322 [2024-12-08 21:03:16.096773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.322 [2024-12-08 21:03:16.097240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.322 [2024-12-08 21:03:16.097287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:55.322 [2024-12-08 21:03:16.097302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:16:55.322 [2024-12-08 21:03:16.097315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.322 [2024-12-08 21:03:16.097532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.322 [2024-12-08 21:03:16.097566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:55.322 [2024-12-08 21:03:16.097582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:16:55.322 [2024-12-08 21:03:16.097596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.322 [2024-12-08 21:03:16.113095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.322 [2024-12-08 21:03:16.113145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:55.322 [2024-12-08 21:03:16.113179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.476 ms 00:16:55.322 [2024-12-08 21:03:16.113191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.322 [2024-12-08 21:03:16.123960] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:55.322 [2024-12-08 21:03:16.128438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.322 [2024-12-08 21:03:16.128485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:55.322 [2024-12-08 21:03:16.128533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.146 ms 00:16:55.322 [2024-12-08 21:03:16.128544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.322 [2024-12-08 21:03:16.199354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.322 [2024-12-08 21:03:16.199427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:55.322 [2024-12-08 21:03:16.199463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.759 ms 00:16:55.322 [2024-12-08 21:03:16.199474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:55.322 [2024-12-08 21:03:16.199537] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:55.322 [2024-12-08 21:03:16.199557] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:57.870 [2024-12-08 21:03:18.859662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.870 [2024-12-08 21:03:18.859730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:57.870 [2024-12-08 21:03:18.859767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2660.141 ms 00:16:57.870 [2024-12-08 21:03:18.859778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.870 [2024-12-08 21:03:18.859977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.870 [2024-12-08 21:03:18.859994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:57.870 [2024-12-08 21:03:18.860008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:16:57.870 [2024-12-08 21:03:18.860018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.870 [2024-12-08 21:03:18.888205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.870 [2024-12-08 21:03:18.888260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:57.870 [2024-12-08 21:03:18.888281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.050 ms 00:16:57.870 [2024-12-08 21:03:18.888324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.130 [2024-12-08 21:03:18.914614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.130 [2024-12-08 21:03:18.914667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:58.130 [2024-12-08 21:03:18.914704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.225 ms 00:16:58.130 [2024-12-08 21:03:18.914715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.130 [2024-12-08 21:03:18.915226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.130 [2024-12-08 21:03:18.915259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.130 [2024-12-08 21:03:18.915276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:16:58.130 [2024-12-08 21:03:18.915288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.130 [2024-12-08 21:03:18.979501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.130 [2024-12-08 21:03:18.979539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:58.130 [2024-12-08 21:03:18.979573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.135 ms 00:16:58.130 [2024-12-08 21:03:18.979585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.130 [2024-12-08 21:03:19.004717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.130 [2024-12-08 21:03:19.004755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:58.130 [2024-12-08 21:03:19.004788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.084 ms 00:16:58.130 [2024-12-08 21:03:19.004798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.130 [2024-12-08 
21:03:19.006609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.130 [2024-12-08 21:03:19.006661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:58.130 [2024-12-08 21:03:19.006678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.769 ms 00:16:58.130 [2024-12-08 21:03:19.006691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.130 [2024-12-08 21:03:19.031334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.130 [2024-12-08 21:03:19.031370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:58.130 [2024-12-08 21:03:19.031403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.598 ms 00:16:58.130 [2024-12-08 21:03:19.031413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.130 [2024-12-08 21:03:19.031462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.130 [2024-12-08 21:03:19.031479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.130 [2024-12-08 21:03:19.031495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:58.130 [2024-12-08 21:03:19.031505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.130 [2024-12-08 21:03:19.031592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.130 [2024-12-08 21:03:19.031608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:58.130 [2024-12-08 21:03:19.031652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:58.130 [2024-12-08 21:03:19.031662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.130 [2024-12-08 21:03:19.032885] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3009.015 ms, result 0 00:16:58.130 { 00:16:58.130 "name": "ftl0", 00:16:58.130 "uuid": "dd76519a-0f31-42b3-a95c-63398f2b1247" 00:16:58.130 } 00:16:58.130 21:03:19 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:58.130 21:03:19 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:58.130 21:03:19 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:58.390 21:03:19 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:58.390 [2024-12-08 21:03:19.396804] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:58.390 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:58.390 Zero copy mechanism will not be used. 00:16:58.390 Running I/O for 4 seconds... 
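For reference, the ftl0 bdev exercised by the perform_tests runs below was assembled entirely through the rpc.py calls traced above. Condensed into one sequence (UUIDs elided to placeholders; the names, sizes and PCI addresses are copied from the log, so this is a sketch of the flow rather than the test script itself):

rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0    # base NVMe device
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                            # lvstore on the base bdev
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>              # 103424 MiB thin-provisioned lvol
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0     # NV cache device
rpc.py bdev_split_create nvc0n1 -s 5171 1                              # carve a 5171 MiB cache split
rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20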
00:17:02.592 00:17:02.592 Latency(us) 00:17:02.592 [2024-12-08T21:03:23.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.593 [2024-12-08T21:03:23.636Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:02.593 ftl0 : 4.00 1675.95 111.29 0.00 0.00 627.00 242.04 10843.23 00:17:02.593 [2024-12-08T21:03:23.636Z] =================================================================================================================== 00:17:02.593 [2024-12-08T21:03:23.636Z] Total : 1675.95 111.29 0.00 0.00 627.00 242.04 10843.23 00:17:02.593 [2024-12-08 21:03:23.405152] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:02.593 0 00:17:02.593 21:03:23 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:02.593 [2024-12-08 21:03:23.556934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:02.593 Running I/O for 4 seconds... 00:17:06.786 00:17:06.786 Latency(us) 00:17:06.786 [2024-12-08T21:03:27.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.786 [2024-12-08T21:03:27.829Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.786 ftl0 : 4.02 7344.93 28.69 0.00 0.00 17375.35 312.79 30265.72 00:17:06.786 [2024-12-08T21:03:27.829Z] =================================================================================================================== 00:17:06.786 [2024-12-08T21:03:27.829Z] Total : 7344.93 28.69 0.00 0.00 17375.35 0.00 30265.72 00:17:06.786 [2024-12-08 21:03:27.585428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:06.786 0 00:17:06.786 21:03:27 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:06.786 [2024-12-08 21:03:27.735878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:06.786 Running I/O for 4 seconds... 
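A quick cross-check on the two completed tables above: the MiB/s column is just IOPS multiplied by the IO size, so the figures can be verified by hand (awk used purely as a calculator here):

awk 'BEGIN { printf "%.2f\n", 1675.95 * 69632 / 1048576 }'   # 111.29 MiB/s for the 69632-byte run
awk 'BEGIN { printf "%.2f\n", 7344.93 * 4096 / 1048576 }'    # 28.69 MiB/s for the 4096-byte randwrite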
00:17:10.978 00:17:10.979 Latency(us) 00:17:10.979 [2024-12-08T21:03:32.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.979 [2024-12-08T21:03:32.022Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.979 Verification LBA range: start 0x0 length 0x1400000 00:17:10.979 ftl0 : 4.01 7639.70 29.84 0.00 0.00 16706.61 262.52 19422.49 00:17:10.979 [2024-12-08T21:03:32.022Z] =================================================================================================================== 00:17:10.979 [2024-12-08T21:03:32.022Z] Total : 7639.70 29.84 0.00 0.00 16706.61 0.00 19422.49 00:17:10.979 [2024-12-08 21:03:31.759007] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:10.979 0 00:17:10.979 21:03:31 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:10.979 [2024-12-08 21:03:32.014333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.979 [2024-12-08 21:03:32.014379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:10.979 [2024-12-08 21:03:32.014416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.979 [2024-12-08 21:03:32.014427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.979 [2024-12-08 21:03:32.014460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:10.979 [2024-12-08 21:03:32.017638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.979 [2024-12-08 21:03:32.017673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:10.979 [2024-12-08 21:03:32.017702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.156 ms 00:17:10.979 [2024-12-08 21:03:32.017716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.979 [2024-12-08 21:03:32.019362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.979 [2024-12-08 21:03:32.019419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:10.979 [2024-12-08 21:03:32.019436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:17:10.979 [2024-12-08 21:03:32.019464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.238 [2024-12-08 21:03:32.187531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.238 [2024-12-08 21:03:32.187618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:11.238 [2024-12-08 21:03:32.187639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 168.045 ms 00:17:11.238 [2024-12-08 21:03:32.187652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.238 [2024-12-08 21:03:32.193252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.238 [2024-12-08 21:03:32.193316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:11.238 [2024-12-08 21:03:32.193330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.561 ms 00:17:11.238 [2024-12-08 21:03:32.193342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.238 [2024-12-08 21:03:32.217553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.238 [2024-12-08 21:03:32.217594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:17:11.238 [2024-12-08 21:03:32.217624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.137 ms 00:17:11.238 [2024-12-08 21:03:32.217639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.238 [2024-12-08 21:03:32.232887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.238 [2024-12-08 21:03:32.232927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:11.238 [2024-12-08 21:03:32.232957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.210 ms 00:17:11.238 [2024-12-08 21:03:32.232970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.238 [2024-12-08 21:03:32.233177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.238 [2024-12-08 21:03:32.233202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:11.238 [2024-12-08 21:03:32.233215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:17:11.238 [2024-12-08 21:03:32.233227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.238 [2024-12-08 21:03:32.257900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.239 [2024-12-08 21:03:32.257940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:11.239 [2024-12-08 21:03:32.257971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.653 ms 00:17:11.239 [2024-12-08 21:03:32.257982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.499 [2024-12-08 21:03:32.282865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.499 [2024-12-08 21:03:32.282939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:11.499 [2024-12-08 21:03:32.282954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.844 ms 00:17:11.499 [2024-12-08 21:03:32.282969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.499 [2024-12-08 21:03:32.307097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.499 [2024-12-08 21:03:32.307166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:11.499 [2024-12-08 21:03:32.307182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.088 ms 00:17:11.499 [2024-12-08 21:03:32.307193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.499 [2024-12-08 21:03:32.330865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.499 [2024-12-08 21:03:32.330905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:11.499 [2024-12-08 21:03:32.330935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.594 ms 00:17:11.499 [2024-12-08 21:03:32.330946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.499 [2024-12-08 21:03:32.330984] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:11.499 [2024-12-08 21:03:32.331008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331041] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:11.499 [2024-12-08 21:03:32.331283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 
21:03:32.331364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:17:11.500 [2024-12-08 21:03:32.331654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.331997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:11.500 [2024-12-08 21:03:32.332196] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:11.500 [2024-12-08 21:03:32.332206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dd76519a-0f31-42b3-a95c-63398f2b1247 00:17:11.500 [2024-12-08 21:03:32.332220] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:11.500 
[2024-12-08 21:03:32.332230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:11.500 [2024-12-08 21:03:32.332240] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:11.500 [2024-12-08 21:03:32.332250] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:11.500 [2024-12-08 21:03:32.332261] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:11.500 [2024-12-08 21:03:32.332273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:11.500 [2024-12-08 21:03:32.332309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:11.500 [2024-12-08 21:03:32.332335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:11.500 [2024-12-08 21:03:32.332346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:11.500 [2024-12-08 21:03:32.332356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.500 [2024-12-08 21:03:32.332368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:11.501 [2024-12-08 21:03:32.332380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:17:11.501 [2024-12-08 21:03:32.332392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.345821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.501 [2024-12-08 21:03:32.345875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:11.501 [2024-12-08 21:03:32.345890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.389 ms 00:17:11.501 [2024-12-08 21:03:32.345907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.346181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.501 [2024-12-08 21:03:32.346210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:11.501 [2024-12-08 21:03:32.346239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:17:11.501 [2024-12-08 21:03:32.346251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.384182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.384224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:11.501 [2024-12-08 21:03:32.384257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.384268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.384362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.384382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:11.501 [2024-12-08 21:03:32.384393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.384405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.384520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.384544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:11.501 [2024-12-08 21:03:32.384557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.384574] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.384602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.384631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:11.501 [2024-12-08 21:03:32.384657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.384669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.459413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.459485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:11.501 [2024-12-08 21:03:32.459501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.459517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.489234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.489273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:11.501 [2024-12-08 21:03:32.489304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.489315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.489393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.489413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:11.501 [2024-12-08 21:03:32.489424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.489438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.489489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.489522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:11.501 [2024-12-08 21:03:32.489533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.489560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.489663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.489684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:11.501 [2024-12-08 21:03:32.489696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.489708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.489751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.489772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:11.501 [2024-12-08 21:03:32.489783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.501 [2024-12-08 21:03:32.489795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.501 [2024-12-08 21:03:32.489835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.501 [2024-12-08 21:03:32.489857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:11.501 [2024-12-08 21:03:32.489869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms
00:17:11.501 [2024-12-08 21:03:32.489884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.501 [2024-12-08 21:03:32.489939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.501 [2024-12-08 21:03:32.489956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:11.501 [2024-12-08 21:03:32.489968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.501 [2024-12-08 21:03:32.489979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.501 [2024-12-08 21:03:32.490155] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 475.748 ms, result 0
00:17:11.501 true
00:17:11.501 21:03:32 -- ftl/bdevperf.sh@37 -- # killprocess 72189
00:17:11.501 21:03:32 -- common/autotest_common.sh@936 -- # '[' -z 72189 ']'
00:17:11.501 21:03:32 -- common/autotest_common.sh@940 -- # kill -0 72189
00:17:11.501 21:03:32 -- common/autotest_common.sh@941 -- # uname
00:17:11.501 21:03:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:11.501 21:03:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72189
00:17:11.761 killing process with pid 72189
Received shutdown signal, test time was about 4.000000 seconds
00:17:11.761
00:17:11.761 Latency(us)
00:17:11.761 [2024-12-08T21:03:32.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:11.761 [2024-12-08T21:03:32.804Z] ===================================================================================================================
00:17:11.761 [2024-12-08T21:03:32.804Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:11.761 21:03:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:11.761 21:03:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:11.761 21:03:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72189'
00:17:11.761 21:03:32 -- common/autotest_common.sh@955 -- # kill 72189
00:17:11.761 21:03:32 -- common/autotest_common.sh@960 -- # wait 72189
00:17:12.699 21:03:33 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:17:12.699 21:03:33 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:17:12.699 21:03:33 -- common/autotest_common.sh@728 -- # xtrace_disable
00:17:12.699 21:03:33 -- common/autotest_common.sh@10 -- # set +x
00:17:12.699 21:03:33 -- ftl/bdevperf.sh@41 -- # remove_shm
00:17:12.699 Remove shared memory files
00:17:12.699 21:03:33 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:17:12.699 21:03:33 -- ftl/common.sh@205 -- # rm -f rm -f
00:17:12.699 21:03:33 -- ftl/common.sh@206 -- # rm -f rm -f
00:17:12.699 21:03:33 -- ftl/common.sh@207 -- # rm -f rm -f
00:17:12.699 21:03:33 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:17:12.699 21:03:33 -- ftl/common.sh@209 -- # rm -f rm -f
00:17:12.699
00:17:12.699 real 0m22.060s
00:17:12.699 user 0m25.428s
00:17:12.699 sys 0m0.984s
00:17:12.699 ************************************
00:17:12.699 END TEST ftl_bdevperf
00:17:12.699 ************************************
00:17:12.699 21:03:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:12.699 21:03:33 -- common/autotest_common.sh@10 -- # set +x
00:17:12.699 21:03:33 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
21:03:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
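The ftl_bdevperf teardown traced above follows a fixed pattern: the FTL bdev is deleted over RPC (which drives the 'FTL shutdown' management process whose persist/rollback steps fill the preceding lines), the bdevperf process is signalled and reaped, and shared-memory files are cleaned up. A minimal bash sketch of that sequence; the bdev_ftl_delete call is the one shown in the trace, while the PID variable name and the rm target are illustrative stand-ins, since the actual shm file list is truncated in this log:

# teardown sketch, mirroring the ftl/bdevperf.sh trace above
scripts/rpc.py bdev_ftl_delete -b ftl0     # runs the 'FTL shutdown' management process
kill "$bdevperf_pid"                       # killprocess helper: signal bdevperf (pid 72189 in this run)
wait "$bdevperf_pid"                       # reap the process so its exit status is collected
rm -f /dev/shm/bdevperf_trace.pid*         # illustrative target only; the real rm list is elided above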
00:17:12.699 21:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:12.699 21:03:33 -- common/autotest_common.sh@10 -- # set +x 00:17:12.699 ************************************ 00:17:12.699 START TEST ftl_trim 00:17:12.699 ************************************ 00:17:12.699 21:03:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:17:12.699 * Looking for test storage... 00:17:12.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:12.699 21:03:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:12.699 21:03:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:12.699 21:03:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:12.699 21:03:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:12.699 21:03:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:12.699 21:03:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:12.699 21:03:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:12.699 21:03:33 -- scripts/common.sh@335 -- # IFS=.-: 00:17:12.699 21:03:33 -- scripts/common.sh@335 -- # read -ra ver1 00:17:12.699 21:03:33 -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.699 21:03:33 -- scripts/common.sh@336 -- # read -ra ver2 00:17:12.699 21:03:33 -- scripts/common.sh@337 -- # local 'op=<' 00:17:12.699 21:03:33 -- scripts/common.sh@339 -- # ver1_l=2 00:17:12.699 21:03:33 -- scripts/common.sh@340 -- # ver2_l=1 00:17:12.699 21:03:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:12.699 21:03:33 -- scripts/common.sh@343 -- # case "$op" in 00:17:12.699 21:03:33 -- scripts/common.sh@344 -- # : 1 00:17:12.699 21:03:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:12.699 21:03:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.699 21:03:33 -- scripts/common.sh@364 -- # decimal 1 00:17:12.699 21:03:33 -- scripts/common.sh@352 -- # local d=1 00:17:12.699 21:03:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.699 21:03:33 -- scripts/common.sh@354 -- # echo 1 00:17:12.699 21:03:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:12.699 21:03:33 -- scripts/common.sh@365 -- # decimal 2 00:17:12.699 21:03:33 -- scripts/common.sh@352 -- # local d=2 00:17:12.699 21:03:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.699 21:03:33 -- scripts/common.sh@354 -- # echo 2 00:17:12.699 21:03:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:12.699 21:03:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:12.699 21:03:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:12.699 21:03:33 -- scripts/common.sh@367 -- # return 0 00:17:12.699 21:03:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.699 21:03:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:12.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.699 --rc genhtml_branch_coverage=1 00:17:12.699 --rc genhtml_function_coverage=1 00:17:12.699 --rc genhtml_legend=1 00:17:12.699 --rc geninfo_all_blocks=1 00:17:12.699 --rc geninfo_unexecuted_blocks=1 00:17:12.699 00:17:12.699 ' 00:17:12.699 21:03:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:12.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.699 --rc genhtml_branch_coverage=1 00:17:12.699 --rc genhtml_function_coverage=1 00:17:12.699 --rc genhtml_legend=1 00:17:12.699 --rc geninfo_all_blocks=1 00:17:12.699 --rc geninfo_unexecuted_blocks=1 00:17:12.699 00:17:12.699 ' 00:17:12.699 21:03:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:12.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.699 --rc genhtml_branch_coverage=1 00:17:12.699 --rc genhtml_function_coverage=1 00:17:12.699 --rc genhtml_legend=1 00:17:12.699 --rc geninfo_all_blocks=1 00:17:12.699 --rc geninfo_unexecuted_blocks=1 00:17:12.699 00:17:12.699 ' 00:17:12.699 21:03:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:12.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.699 --rc genhtml_branch_coverage=1 00:17:12.699 --rc genhtml_function_coverage=1 00:17:12.699 --rc genhtml_legend=1 00:17:12.699 --rc geninfo_all_blocks=1 00:17:12.699 --rc geninfo_unexecuted_blocks=1 00:17:12.699 00:17:12.699 ' 00:17:12.699 21:03:33 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:12.699 21:03:33 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:12.699 21:03:33 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:12.699 21:03:33 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:12.699 21:03:33 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:12.699 21:03:33 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:12.958 21:03:33 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:12.958 21:03:33 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:12.958 21:03:33 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:12.958 21:03:33 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.958 21:03:33 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.958 21:03:33 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:12.958 21:03:33 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:12.958 21:03:33 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:12.958 21:03:33 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:12.958 21:03:33 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:12.958 21:03:33 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:12.958 21:03:33 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.958 21:03:33 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.958 21:03:33 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:12.958 21:03:33 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:12.958 21:03:33 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:12.958 21:03:33 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:12.958 21:03:33 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:12.958 21:03:33 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:12.958 21:03:33 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:12.959 21:03:33 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:12.959 21:03:33 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:12.959 21:03:33 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:12.959 21:03:33 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:12.959 21:03:33 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:17:12.959 21:03:33 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:17:12.959 21:03:33 -- ftl/trim.sh@25 -- # timeout=240 00:17:12.959 21:03:33 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:12.959 21:03:33 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:12.959 21:03:33 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:12.959 21:03:33 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:12.959 21:03:33 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:12.959 21:03:33 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:12.959 21:03:33 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:12.959 21:03:33 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:12.959 21:03:33 -- ftl/trim.sh@40 -- # svcpid=72551 00:17:12.959 21:03:33 -- ftl/trim.sh@41 -- # waitforlisten 72551 00:17:12.959 21:03:33 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:12.959 21:03:33 -- common/autotest_common.sh@829 -- # '[' -z 72551 ']' 00:17:12.959 21:03:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.959 
21:03:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.959 21:03:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.959 21:03:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.959 21:03:33 -- common/autotest_common.sh@10 -- # set +x 00:17:12.959 [2024-12-08 21:03:33.863329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:12.959 [2024-12-08 21:03:33.863490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72551 ] 00:17:13.218 [2024-12-08 21:03:34.032055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:13.218 [2024-12-08 21:03:34.176253] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:13.218 [2024-12-08 21:03:34.176643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.218 [2024-12-08 21:03:34.176916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.218 [2024-12-08 21:03:34.176930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.786 21:03:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.786 21:03:34 -- common/autotest_common.sh@862 -- # return 0 00:17:13.786 21:03:34 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:13.786 21:03:34 -- ftl/common.sh@54 -- # local name=nvme0 00:17:13.786 21:03:34 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:13.786 21:03:34 -- ftl/common.sh@56 -- # local size=103424 00:17:13.786 21:03:34 -- ftl/common.sh@59 -- # local base_bdev 00:17:13.786 21:03:34 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:14.355 21:03:35 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:14.355 21:03:35 -- ftl/common.sh@62 -- # local base_size 00:17:14.355 21:03:35 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:14.355 21:03:35 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:14.355 21:03:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:14.355 21:03:35 -- common/autotest_common.sh@1369 -- # local bs 00:17:14.355 21:03:35 -- common/autotest_common.sh@1370 -- # local nb 00:17:14.355 21:03:35 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:14.355 21:03:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:14.355 { 00:17:14.355 "name": "nvme0n1", 00:17:14.355 "aliases": [ 00:17:14.355 "95c7dd51-0e0e-42b5-bf3e-8a524757c438" 00:17:14.355 ], 00:17:14.355 "product_name": "NVMe disk", 00:17:14.355 "block_size": 4096, 00:17:14.355 "num_blocks": 1310720, 00:17:14.355 "uuid": "95c7dd51-0e0e-42b5-bf3e-8a524757c438", 00:17:14.355 "assigned_rate_limits": { 00:17:14.355 "rw_ios_per_sec": 0, 00:17:14.355 "rw_mbytes_per_sec": 0, 00:17:14.355 "r_mbytes_per_sec": 0, 00:17:14.355 "w_mbytes_per_sec": 0 00:17:14.355 }, 00:17:14.355 "claimed": true, 00:17:14.355 "claim_type": "read_many_write_one", 00:17:14.355 "zoned": false, 00:17:14.355 "supported_io_types": { 00:17:14.355 "read": true, 00:17:14.355 "write": true, 00:17:14.355 "unmap": true, 00:17:14.355 
"write_zeroes": true, 00:17:14.355 "flush": true, 00:17:14.355 "reset": true, 00:17:14.355 "compare": true, 00:17:14.355 "compare_and_write": false, 00:17:14.355 "abort": true, 00:17:14.355 "nvme_admin": true, 00:17:14.355 "nvme_io": true 00:17:14.355 }, 00:17:14.355 "driver_specific": { 00:17:14.355 "nvme": [ 00:17:14.355 { 00:17:14.355 "pci_address": "0000:00:07.0", 00:17:14.355 "trid": { 00:17:14.355 "trtype": "PCIe", 00:17:14.355 "traddr": "0000:00:07.0" 00:17:14.355 }, 00:17:14.355 "ctrlr_data": { 00:17:14.355 "cntlid": 0, 00:17:14.355 "vendor_id": "0x1b36", 00:17:14.355 "model_number": "QEMU NVMe Ctrl", 00:17:14.355 "serial_number": "12341", 00:17:14.355 "firmware_revision": "8.0.0", 00:17:14.355 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:14.355 "oacs": { 00:17:14.355 "security": 0, 00:17:14.355 "format": 1, 00:17:14.355 "firmware": 0, 00:17:14.355 "ns_manage": 1 00:17:14.355 }, 00:17:14.355 "multi_ctrlr": false, 00:17:14.355 "ana_reporting": false 00:17:14.355 }, 00:17:14.355 "vs": { 00:17:14.355 "nvme_version": "1.4" 00:17:14.355 }, 00:17:14.355 "ns_data": { 00:17:14.355 "id": 1, 00:17:14.355 "can_share": false 00:17:14.355 } 00:17:14.355 } 00:17:14.355 ], 00:17:14.355 "mp_policy": "active_passive" 00:17:14.355 } 00:17:14.355 } 00:17:14.355 ]' 00:17:14.355 21:03:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:14.613 21:03:35 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:14.613 21:03:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:14.613 21:03:35 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:14.613 21:03:35 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:14.613 21:03:35 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:14.613 21:03:35 -- ftl/common.sh@63 -- # base_size=5120 00:17:14.613 21:03:35 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:14.613 21:03:35 -- ftl/common.sh@67 -- # clear_lvols 00:17:14.613 21:03:35 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:14.613 21:03:35 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:14.872 21:03:35 -- ftl/common.sh@28 -- # stores=2718ba02-4ef6-4392-b5e2-ce81a31cf9a2 00:17:14.872 21:03:35 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:14.872 21:03:35 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2718ba02-4ef6-4392-b5e2-ce81a31cf9a2 00:17:15.131 21:03:35 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:15.389 21:03:36 -- ftl/common.sh@68 -- # lvs=95fdb38c-cd12-4fe2-9986-eba8b2821102 00:17:15.389 21:03:36 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 95fdb38c-cd12-4fe2-9986-eba8b2821102 00:17:15.647 21:03:36 -- ftl/trim.sh@43 -- # split_bdev=53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:15.647 21:03:36 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:15.647 21:03:36 -- ftl/common.sh@35 -- # local name=nvc0 00:17:15.647 21:03:36 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:15.647 21:03:36 -- ftl/common.sh@37 -- # local base_bdev=53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:15.647 21:03:36 -- ftl/common.sh@38 -- # local cache_size= 00:17:15.647 21:03:36 -- ftl/common.sh@41 -- # get_bdev_size 53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:15.647 21:03:36 -- common/autotest_common.sh@1367 -- # local bdev_name=53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:15.647 21:03:36 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:17:15.647 21:03:36 -- common/autotest_common.sh@1369 -- # local bs 00:17:15.647 21:03:36 -- common/autotest_common.sh@1370 -- # local nb 00:17:15.647 21:03:36 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:15.906 21:03:36 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:15.906 { 00:17:15.906 "name": "53503f05-8be7-4c4d-94cd-6372d4af2a40", 00:17:15.906 "aliases": [ 00:17:15.906 "lvs/nvme0n1p0" 00:17:15.906 ], 00:17:15.906 "product_name": "Logical Volume", 00:17:15.906 "block_size": 4096, 00:17:15.906 "num_blocks": 26476544, 00:17:15.906 "uuid": "53503f05-8be7-4c4d-94cd-6372d4af2a40", 00:17:15.906 "assigned_rate_limits": { 00:17:15.906 "rw_ios_per_sec": 0, 00:17:15.906 "rw_mbytes_per_sec": 0, 00:17:15.906 "r_mbytes_per_sec": 0, 00:17:15.906 "w_mbytes_per_sec": 0 00:17:15.906 }, 00:17:15.906 "claimed": false, 00:17:15.906 "zoned": false, 00:17:15.906 "supported_io_types": { 00:17:15.906 "read": true, 00:17:15.906 "write": true, 00:17:15.906 "unmap": true, 00:17:15.906 "write_zeroes": true, 00:17:15.906 "flush": false, 00:17:15.906 "reset": true, 00:17:15.906 "compare": false, 00:17:15.906 "compare_and_write": false, 00:17:15.906 "abort": false, 00:17:15.906 "nvme_admin": false, 00:17:15.906 "nvme_io": false 00:17:15.906 }, 00:17:15.906 "driver_specific": { 00:17:15.906 "lvol": { 00:17:15.906 "lvol_store_uuid": "95fdb38c-cd12-4fe2-9986-eba8b2821102", 00:17:15.906 "base_bdev": "nvme0n1", 00:17:15.906 "thin_provision": true, 00:17:15.906 "snapshot": false, 00:17:15.906 "clone": false, 00:17:15.906 "esnap_clone": false 00:17:15.906 } 00:17:15.906 } 00:17:15.906 } 00:17:15.906 ]' 00:17:15.906 21:03:36 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:15.906 21:03:36 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:15.906 21:03:36 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:15.906 21:03:36 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:15.906 21:03:36 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:15.906 21:03:36 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:15.906 21:03:36 -- ftl/common.sh@41 -- # local base_size=5171 00:17:15.906 21:03:36 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:15.906 21:03:36 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:16.164 21:03:37 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:16.164 21:03:37 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:16.164 21:03:37 -- ftl/common.sh@48 -- # get_bdev_size 53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:16.164 21:03:37 -- common/autotest_common.sh@1367 -- # local bdev_name=53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:16.164 21:03:37 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:16.164 21:03:37 -- common/autotest_common.sh@1369 -- # local bs 00:17:16.164 21:03:37 -- common/autotest_common.sh@1370 -- # local nb 00:17:16.164 21:03:37 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:16.422 21:03:37 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:16.422 { 00:17:16.422 "name": "53503f05-8be7-4c4d-94cd-6372d4af2a40", 00:17:16.422 "aliases": [ 00:17:16.422 "lvs/nvme0n1p0" 00:17:16.422 ], 00:17:16.422 "product_name": "Logical Volume", 00:17:16.422 "block_size": 4096, 00:17:16.422 "num_blocks": 26476544, 
00:17:16.422 "uuid": "53503f05-8be7-4c4d-94cd-6372d4af2a40", 00:17:16.422 "assigned_rate_limits": { 00:17:16.422 "rw_ios_per_sec": 0, 00:17:16.422 "rw_mbytes_per_sec": 0, 00:17:16.422 "r_mbytes_per_sec": 0, 00:17:16.422 "w_mbytes_per_sec": 0 00:17:16.422 }, 00:17:16.422 "claimed": false, 00:17:16.422 "zoned": false, 00:17:16.422 "supported_io_types": { 00:17:16.422 "read": true, 00:17:16.422 "write": true, 00:17:16.422 "unmap": true, 00:17:16.422 "write_zeroes": true, 00:17:16.422 "flush": false, 00:17:16.422 "reset": true, 00:17:16.422 "compare": false, 00:17:16.422 "compare_and_write": false, 00:17:16.422 "abort": false, 00:17:16.422 "nvme_admin": false, 00:17:16.422 "nvme_io": false 00:17:16.422 }, 00:17:16.422 "driver_specific": { 00:17:16.422 "lvol": { 00:17:16.422 "lvol_store_uuid": "95fdb38c-cd12-4fe2-9986-eba8b2821102", 00:17:16.422 "base_bdev": "nvme0n1", 00:17:16.422 "thin_provision": true, 00:17:16.422 "snapshot": false, 00:17:16.422 "clone": false, 00:17:16.422 "esnap_clone": false 00:17:16.422 } 00:17:16.422 } 00:17:16.422 } 00:17:16.422 ]' 00:17:16.422 21:03:37 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:16.680 21:03:37 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:16.680 21:03:37 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:16.680 21:03:37 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:16.680 21:03:37 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:16.680 21:03:37 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:16.680 21:03:37 -- ftl/common.sh@48 -- # cache_size=5171 00:17:16.680 21:03:37 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:16.939 21:03:37 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:16.939 21:03:37 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:16.939 21:03:37 -- ftl/trim.sh@47 -- # get_bdev_size 53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:16.939 21:03:37 -- common/autotest_common.sh@1367 -- # local bdev_name=53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:16.939 21:03:37 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:16.939 21:03:37 -- common/autotest_common.sh@1369 -- # local bs 00:17:16.939 21:03:37 -- common/autotest_common.sh@1370 -- # local nb 00:17:16.939 21:03:37 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 53503f05-8be7-4c4d-94cd-6372d4af2a40 00:17:16.939 21:03:37 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:16.939 { 00:17:16.939 "name": "53503f05-8be7-4c4d-94cd-6372d4af2a40", 00:17:16.939 "aliases": [ 00:17:16.939 "lvs/nvme0n1p0" 00:17:16.939 ], 00:17:16.939 "product_name": "Logical Volume", 00:17:16.939 "block_size": 4096, 00:17:16.939 "num_blocks": 26476544, 00:17:16.939 "uuid": "53503f05-8be7-4c4d-94cd-6372d4af2a40", 00:17:16.939 "assigned_rate_limits": { 00:17:16.939 "rw_ios_per_sec": 0, 00:17:16.939 "rw_mbytes_per_sec": 0, 00:17:16.939 "r_mbytes_per_sec": 0, 00:17:16.939 "w_mbytes_per_sec": 0 00:17:16.939 }, 00:17:16.939 "claimed": false, 00:17:16.939 "zoned": false, 00:17:16.939 "supported_io_types": { 00:17:16.939 "read": true, 00:17:16.939 "write": true, 00:17:16.939 "unmap": true, 00:17:16.939 "write_zeroes": true, 00:17:16.939 "flush": false, 00:17:16.939 "reset": true, 00:17:16.939 "compare": false, 00:17:16.939 "compare_and_write": false, 00:17:16.939 "abort": false, 00:17:16.939 "nvme_admin": false, 00:17:16.939 "nvme_io": false 00:17:16.939 }, 00:17:16.939 "driver_specific": { 00:17:16.939 "lvol": { 00:17:16.939 
"lvol_store_uuid": "95fdb38c-cd12-4fe2-9986-eba8b2821102", 00:17:16.939 "base_bdev": "nvme0n1", 00:17:16.939 "thin_provision": true, 00:17:16.939 "snapshot": false, 00:17:16.939 "clone": false, 00:17:16.939 "esnap_clone": false 00:17:16.939 } 00:17:16.939 } 00:17:16.939 } 00:17:16.939 ]' 00:17:16.939 21:03:37 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:17.198 21:03:38 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:17.198 21:03:38 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:17.198 21:03:38 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:17.198 21:03:38 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:17.198 21:03:38 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:17.198 21:03:38 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:17.198 21:03:38 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 53503f05-8be7-4c4d-94cd-6372d4af2a40 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:17.458 [2024-12-08 21:03:38.328722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.328782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:17.458 [2024-12-08 21:03:38.328804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:17.458 [2024-12-08 21:03:38.328815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.331814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.331866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.458 [2024-12-08 21:03:38.331883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.964 ms 00:17:17.458 [2024-12-08 21:03:38.331894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.332202] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:17.458 [2024-12-08 21:03:38.333149] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:17.458 [2024-12-08 21:03:38.333186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.333198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.458 [2024-12-08 21:03:38.333212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:17:17.458 [2024-12-08 21:03:38.333222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.333407] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4895114b-fe81-4086-8292-4d3881a52f6a 00:17:17.458 [2024-12-08 21:03:38.334313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.334347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:17.458 [2024-12-08 21:03:38.334362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:17.458 [2024-12-08 21:03:38.334374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.338559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.338613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.458 
[2024-12-08 21:03:38.338628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.089 ms 00:17:17.458 [2024-12-08 21:03:38.338640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.338780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.338802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.458 [2024-12-08 21:03:38.338814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:17.458 [2024-12-08 21:03:38.338829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.338871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.338886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:17.458 [2024-12-08 21:03:38.338913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:17.458 [2024-12-08 21:03:38.338941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.338984] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:17.458 [2024-12-08 21:03:38.342931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.342977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.458 [2024-12-08 21:03:38.343017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.955 ms 00:17:17.458 [2024-12-08 21:03:38.343029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.343148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.343166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:17.458 [2024-12-08 21:03:38.343180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:17.458 [2024-12-08 21:03:38.343192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.343230] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:17.458 [2024-12-08 21:03:38.343340] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:17.458 [2024-12-08 21:03:38.343361] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:17.458 [2024-12-08 21:03:38.343374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:17.458 [2024-12-08 21:03:38.343389] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:17.458 [2024-12-08 21:03:38.343403] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:17.458 [2024-12-08 21:03:38.343415] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:17.458 [2024-12-08 21:03:38.343425] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:17.458 [2024-12-08 21:03:38.343438] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:17.458 [2024-12-08 21:03:38.343448] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:17.458 [2024-12-08 
21:03:38.343460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.343469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:17.458 [2024-12-08 21:03:38.343481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:17:17.458 [2024-12-08 21:03:38.343492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.343570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.458 [2024-12-08 21:03:38.343584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:17.458 [2024-12-08 21:03:38.343596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:17.458 [2024-12-08 21:03:38.343606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.458 [2024-12-08 21:03:38.343702] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:17.458 [2024-12-08 21:03:38.343716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:17.458 [2024-12-08 21:03:38.343728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.458 [2024-12-08 21:03:38.343739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.458 [2024-12-08 21:03:38.343750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:17.458 [2024-12-08 21:03:38.343759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:17.458 [2024-12-08 21:03:38.343770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:17.458 [2024-12-08 21:03:38.343779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:17.458 [2024-12-08 21:03:38.343790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:17.458 [2024-12-08 21:03:38.343798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.458 [2024-12-08 21:03:38.343809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:17.458 [2024-12-08 21:03:38.343819] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:17.458 [2024-12-08 21:03:38.343829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.458 [2024-12-08 21:03:38.343838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:17.458 [2024-12-08 21:03:38.343851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:17.458 [2024-12-08 21:03:38.343860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.458 [2024-12-08 21:03:38.343873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:17.458 [2024-12-08 21:03:38.343882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:17.458 [2024-12-08 21:03:38.343892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.458 [2024-12-08 21:03:38.343901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:17.459 [2024-12-08 21:03:38.343911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:17.459 [2024-12-08 21:03:38.343921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:17.459 [2024-12-08 21:03:38.343931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:17.459 [2024-12-08 21:03:38.343941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:17:17.459 [2024-12-08 21:03:38.343951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:17.459 [2024-12-08 21:03:38.343960] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:17.459 [2024-12-08 21:03:38.343972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:17.459 [2024-12-08 21:03:38.343981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:17.459 [2024-12-08 21:03:38.343991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:17.459 [2024-12-08 21:03:38.344000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:17.459 [2024-12-08 21:03:38.344011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:17.459 [2024-12-08 21:03:38.344020] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:17.459 [2024-12-08 21:03:38.344032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:17.459 [2024-12-08 21:03:38.344041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:17.459 [2024-12-08 21:03:38.344052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:17.459 [2024-12-08 21:03:38.344061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:17.459 [2024-12-08 21:03:38.344100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.459 [2024-12-08 21:03:38.344113] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:17.459 [2024-12-08 21:03:38.344125] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:17.459 [2024-12-08 21:03:38.344134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.459 [2024-12-08 21:03:38.344147] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:17.459 [2024-12-08 21:03:38.344157] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:17.459 [2024-12-08 21:03:38.344168] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.459 [2024-12-08 21:03:38.344181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.459 [2024-12-08 21:03:38.344193] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:17.459 [2024-12-08 21:03:38.344202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:17.459 [2024-12-08 21:03:38.344213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:17.459 [2024-12-08 21:03:38.344223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:17.459 [2024-12-08 21:03:38.344236] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:17.459 [2024-12-08 21:03:38.344245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:17.459 [2024-12-08 21:03:38.344258] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:17.459 [2024-12-08 21:03:38.344271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.459 [2024-12-08 21:03:38.344310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:17.459 [2024-12-08 21:03:38.344322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:17.459 [2024-12-08 21:03:38.344345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:17.459 [2024-12-08 21:03:38.344355] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:17.459 [2024-12-08 21:03:38.344367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:17.459 [2024-12-08 21:03:38.344378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:17.459 [2024-12-08 21:03:38.344391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:17.459 [2024-12-08 21:03:38.344402] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:17.459 [2024-12-08 21:03:38.344414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:17.459 [2024-12-08 21:03:38.344424] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:17.459 [2024-12-08 21:03:38.344437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:17.459 [2024-12-08 21:03:38.344447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:17.459 [2024-12-08 21:03:38.344464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:17.459 [2024-12-08 21:03:38.344475] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:17.459 [2024-12-08 21:03:38.344488] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.459 [2024-12-08 21:03:38.344499] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:17.459 [2024-12-08 21:03:38.344512] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:17.459 [2024-12-08 21:03:38.344523] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:17.459 [2024-12-08 21:03:38.344535] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:17.459 [2024-12-08 21:03:38.344546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.459 [2024-12-08 21:03:38.344574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:17.459 [2024-12-08 21:03:38.344584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:17:17.459 [2024-12-08 21:03:38.344597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.459 [2024-12-08 21:03:38.360925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:17.459 [2024-12-08 21:03:38.360983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.459 [2024-12-08 21:03:38.360999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.239 ms 00:17:17.459 [2024-12-08 21:03:38.361012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.459 [2024-12-08 21:03:38.361187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.459 [2024-12-08 21:03:38.361216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:17.459 [2024-12-08 21:03:38.361244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:17.459 [2024-12-08 21:03:38.361257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.459 [2024-12-08 21:03:38.395136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.459 [2024-12-08 21:03:38.395208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.459 [2024-12-08 21:03:38.395223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.839 ms 00:17:17.459 [2024-12-08 21:03:38.395236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.459 [2024-12-08 21:03:38.395335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.459 [2024-12-08 21:03:38.395354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.459 [2024-12-08 21:03:38.395366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:17.459 [2024-12-08 21:03:38.395382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.459 [2024-12-08 21:03:38.395725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.459 [2024-12-08 21:03:38.395745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.459 [2024-12-08 21:03:38.395757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:17.459 [2024-12-08 21:03:38.395770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.460 [2024-12-08 21:03:38.395898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.460 [2024-12-08 21:03:38.395915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.460 [2024-12-08 21:03:38.395926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:17.460 [2024-12-08 21:03:38.395938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.460 [2024-12-08 21:03:38.420760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.460 [2024-12-08 21:03:38.420817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.460 [2024-12-08 21:03:38.420834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.786 ms 00:17:17.460 [2024-12-08 21:03:38.420850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.460 [2024-12-08 21:03:38.432245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:17.460 [2024-12-08 21:03:38.445049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.460 [2024-12-08 21:03:38.445126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:17.460 [2024-12-08 21:03:38.445147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.055 ms 00:17:17.460 
[2024-12-08 21:03:38.445158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.718 [2024-12-08 21:03:38.519436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.718 [2024-12-08 21:03:38.519512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:17.718 [2024-12-08 21:03:38.519533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.152 ms 00:17:17.718 [2024-12-08 21:03:38.519545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.718 [2024-12-08 21:03:38.519647] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:17.718 [2024-12-08 21:03:38.519668] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:20.251 [2024-12-08 21:03:40.889158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.251 [2024-12-08 21:03:40.889238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:20.251 [2024-12-08 21:03:40.889260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2369.526 ms 00:17:20.251 [2024-12-08 21:03:40.889271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.251 [2024-12-08 21:03:40.889527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.251 [2024-12-08 21:03:40.889546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:20.251 [2024-12-08 21:03:40.889560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:17:20.252 [2024-12-08 21:03:40.889571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:40.915901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.252 [2024-12-08 21:03:40.915952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:20.252 [2024-12-08 21:03:40.915971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.290 ms 00:17:20.252 [2024-12-08 21:03:40.915981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:40.941837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.252 [2024-12-08 21:03:40.941887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:20.252 [2024-12-08 21:03:40.941908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.766 ms 00:17:20.252 [2024-12-08 21:03:40.941927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:40.942336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.252 [2024-12-08 21:03:40.942360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:20.252 [2024-12-08 21:03:40.942376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:17:20.252 [2024-12-08 21:03:40.942388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:41.010751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.252 [2024-12-08 21:03:41.010804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:20.252 [2024-12-08 21:03:41.010822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.325 ms 00:17:20.252 [2024-12-08 21:03:41.010834] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:41.037920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.252 [2024-12-08 21:03:41.037970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:20.252 [2024-12-08 21:03:41.037988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.998 ms 00:17:20.252 [2024-12-08 21:03:41.037999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:41.041727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.252 [2024-12-08 21:03:41.041775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:20.252 [2024-12-08 21:03:41.041793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:17:20.252 [2024-12-08 21:03:41.041803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:41.068378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.252 [2024-12-08 21:03:41.068429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:20.252 [2024-12-08 21:03:41.068448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.527 ms 00:17:20.252 [2024-12-08 21:03:41.068459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:41.068555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.252 [2024-12-08 21:03:41.068573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:20.252 [2024-12-08 21:03:41.068587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:20.252 [2024-12-08 21:03:41.068600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:41.068746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.252 [2024-12-08 21:03:41.068779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:20.252 [2024-12-08 21:03:41.068794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:20.252 [2024-12-08 21:03:41.068804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.252 [2024-12-08 21:03:41.069854] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:20.252 [2024-12-08 21:03:41.073580] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2740.803 ms, result 0 00:17:20.252 [2024-12-08 21:03:41.074478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.252 { 00:17:20.252 "name": "ftl0", 00:17:20.252 "uuid": "4895114b-fe81-4086-8292-4d3881a52f6a" 00:17:20.252 } 00:17:20.252 21:03:41 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:20.252 21:03:41 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:17:20.252 21:03:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:20.252 21:03:41 -- common/autotest_common.sh@899 -- # local i 00:17:20.252 21:03:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:20.252 21:03:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:20.252 21:03:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:20.510 21:03:41 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:20.769 [ 00:17:20.769 { 00:17:20.769 "name": "ftl0", 00:17:20.769 "aliases": [ 00:17:20.769 "4895114b-fe81-4086-8292-4d3881a52f6a" 00:17:20.769 ], 00:17:20.769 "product_name": "FTL disk", 00:17:20.769 "block_size": 4096, 00:17:20.769 "num_blocks": 23592960, 00:17:20.769 "uuid": "4895114b-fe81-4086-8292-4d3881a52f6a", 00:17:20.769 "assigned_rate_limits": { 00:17:20.769 "rw_ios_per_sec": 0, 00:17:20.769 "rw_mbytes_per_sec": 0, 00:17:20.769 "r_mbytes_per_sec": 0, 00:17:20.769 "w_mbytes_per_sec": 0 00:17:20.769 }, 00:17:20.769 "claimed": false, 00:17:20.769 "zoned": false, 00:17:20.769 "supported_io_types": { 00:17:20.769 "read": true, 00:17:20.769 "write": true, 00:17:20.769 "unmap": true, 00:17:20.769 "write_zeroes": true, 00:17:20.769 "flush": true, 00:17:20.769 "reset": false, 00:17:20.769 "compare": false, 00:17:20.769 "compare_and_write": false, 00:17:20.769 "abort": false, 00:17:20.769 "nvme_admin": false, 00:17:20.769 "nvme_io": false 00:17:20.769 }, 00:17:20.769 "driver_specific": { 00:17:20.769 "ftl": { 00:17:20.769 "base_bdev": "53503f05-8be7-4c4d-94cd-6372d4af2a40", 00:17:20.769 "cache": "nvc0n1p0" 00:17:20.769 } 00:17:20.769 } 00:17:20.769 } 00:17:20.769 ] 00:17:20.769 21:03:41 -- common/autotest_common.sh@905 -- # return 0 00:17:20.769 21:03:41 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:20.769 21:03:41 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:21.029 21:03:41 -- ftl/trim.sh@56 -- # echo ']}' 00:17:21.029 21:03:41 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:21.288 21:03:42 -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:21.288 { 00:17:21.288 "name": "ftl0", 00:17:21.288 "aliases": [ 00:17:21.288 "4895114b-fe81-4086-8292-4d3881a52f6a" 00:17:21.288 ], 00:17:21.288 "product_name": "FTL disk", 00:17:21.288 "block_size": 4096, 00:17:21.288 "num_blocks": 23592960, 00:17:21.288 "uuid": "4895114b-fe81-4086-8292-4d3881a52f6a", 00:17:21.288 "assigned_rate_limits": { 00:17:21.288 "rw_ios_per_sec": 0, 00:17:21.288 "rw_mbytes_per_sec": 0, 00:17:21.288 "r_mbytes_per_sec": 0, 00:17:21.288 "w_mbytes_per_sec": 0 00:17:21.288 }, 00:17:21.288 "claimed": false, 00:17:21.288 "zoned": false, 00:17:21.288 "supported_io_types": { 00:17:21.288 "read": true, 00:17:21.288 "write": true, 00:17:21.288 "unmap": true, 00:17:21.288 "write_zeroes": true, 00:17:21.288 "flush": true, 00:17:21.288 "reset": false, 00:17:21.288 "compare": false, 00:17:21.288 "compare_and_write": false, 00:17:21.288 "abort": false, 00:17:21.288 "nvme_admin": false, 00:17:21.288 "nvme_io": false 00:17:21.288 }, 00:17:21.288 "driver_specific": { 00:17:21.288 "ftl": { 00:17:21.288 "base_bdev": "53503f05-8be7-4c4d-94cd-6372d4af2a40", 00:17:21.288 "cache": "nvc0n1p0" 00:17:21.288 } 00:17:21.288 } 00:17:21.288 } 00:17:21.288 ]' 00:17:21.288 21:03:42 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:21.288 21:03:42 -- ftl/trim.sh@60 -- # nb=23592960 00:17:21.288 21:03:42 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:21.288 [2024-12-08 21:03:42.324492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.288 [2024-12-08 21:03:42.324558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:21.288 [2024-12-08 21:03:42.324577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:21.288 [2024-12-08 21:03:42.324591] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.288 [2024-12-08 21:03:42.324646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:21.288 [2024-12-08 21:03:42.327830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.288 [2024-12-08 21:03:42.327874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:21.288 [2024-12-08 21:03:42.327889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:17:21.288 [2024-12-08 21:03:42.327900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.288 [2024-12-08 21:03:42.328601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.288 [2024-12-08 21:03:42.328643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:21.288 [2024-12-08 21:03:42.328677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:17:21.288 [2024-12-08 21:03:42.328703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.550 [2024-12-08 21:03:42.332456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.550 [2024-12-08 21:03:42.332483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:21.550 [2024-12-08 21:03:42.332502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.715 ms 00:17:21.550 [2024-12-08 21:03:42.332517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.550 [2024-12-08 21:03:42.339445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.550 [2024-12-08 21:03:42.339478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:21.550 [2024-12-08 21:03:42.339495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.875 ms 00:17:21.550 [2024-12-08 21:03:42.339506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.550 [2024-12-08 21:03:42.369214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.550 [2024-12-08 21:03:42.369251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:21.551 [2024-12-08 21:03:42.369270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.593 ms 00:17:21.551 [2024-12-08 21:03:42.369282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.551 [2024-12-08 21:03:42.385656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.551 [2024-12-08 21:03:42.385706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:21.551 [2024-12-08 21:03:42.385724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.283 ms 00:17:21.551 [2024-12-08 21:03:42.385735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.551 [2024-12-08 21:03:42.385949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.551 [2024-12-08 21:03:42.385970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:21.551 [2024-12-08 21:03:42.385985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:17:21.551 [2024-12-08 21:03:42.385996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.551 [2024-12-08 21:03:42.412432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.551 [2024-12-08 21:03:42.412482] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:21.551 [2024-12-08 21:03:42.412500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.402 ms 00:17:21.551 [2024-12-08 21:03:42.412511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.551 [2024-12-08 21:03:42.438579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.551 [2024-12-08 21:03:42.438627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:21.551 [2024-12-08 21:03:42.438644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.981 ms 00:17:21.551 [2024-12-08 21:03:42.438654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.551 [2024-12-08 21:03:42.467567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.551 [2024-12-08 21:03:42.467616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:21.551 [2024-12-08 21:03:42.467633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.829 ms 00:17:21.551 [2024-12-08 21:03:42.467643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.551 [2024-12-08 21:03:42.495068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.551 [2024-12-08 21:03:42.495125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:21.551 [2024-12-08 21:03:42.495145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.285 ms 00:17:21.551 [2024-12-08 21:03:42.495156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.551 [2024-12-08 21:03:42.495242] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:21.551 [2024-12-08 21:03:42.495266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495718] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.495989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 
21:03:42.496000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.496012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.496022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.496034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:21.551 [2024-12-08 21:03:42.496045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:17:21.552 [2024-12-08 21:03:42.496350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:21.552 [2024-12-08 21:03:42.496523] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:21.552 [2024-12-08 21:03:42.496537] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4895114b-fe81-4086-8292-4d3881a52f6a 00:17:21.552 [2024-12-08 21:03:42.496549] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:21.552 [2024-12-08 21:03:42.496562] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:21.552 [2024-12-08 21:03:42.496573] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:21.552 [2024-12-08 21:03:42.496586] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:21.552 [2024-12-08 21:03:42.496627] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:21.552 [2024-12-08 21:03:42.496653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:21.552 [2024-12-08 21:03:42.496665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:21.552 [2024-12-08 21:03:42.496677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:21.552 [2024-12-08 21:03:42.496686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:21.552 [2024-12-08 21:03:42.496698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.552 [2024-12-08 21:03:42.496708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:21.552 [2024-12-08 21:03:42.496721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:17:21.552 [2024-12-08 21:03:42.496731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:21.552 [2024-12-08 21:03:42.511385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.552 [2024-12-08 21:03:42.511431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:21.552 [2024-12-08 21:03:42.511449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.613 ms 00:17:21.552 [2024-12-08 21:03:42.511460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.552 [2024-12-08 21:03:42.511730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.552 [2024-12-08 21:03:42.511746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:21.552 [2024-12-08 21:03:42.511760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:21.552 [2024-12-08 21:03:42.511770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.552 [2024-12-08 21:03:42.562848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.552 [2024-12-08 21:03:42.562903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.552 [2024-12-08 21:03:42.562923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.552 [2024-12-08 21:03:42.562936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.552 [2024-12-08 21:03:42.563052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.552 [2024-12-08 21:03:42.563069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.552 [2024-12-08 21:03:42.563097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.552 [2024-12-08 21:03:42.563138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.552 [2024-12-08 21:03:42.563218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.552 [2024-12-08 21:03:42.563237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.552 [2024-12-08 21:03:42.563250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.552 [2024-12-08 21:03:42.563261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.552 [2024-12-08 21:03:42.563302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.552 [2024-12-08 21:03:42.563315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.552 [2024-12-08 21:03:42.563328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.552 [2024-12-08 21:03:42.563338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.812 [2024-12-08 21:03:42.661001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.812 [2024-12-08 21:03:42.661066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.812 [2024-12-08 21:03:42.661097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.812 [2024-12-08 21:03:42.661113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.812 [2024-12-08 21:03:42.693255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.812 [2024-12-08 21:03:42.693303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.812 [2024-12-08 21:03:42.693320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.812 
[2024-12-08 21:03:42.693331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.812 [2024-12-08 21:03:42.693412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.812 [2024-12-08 21:03:42.693428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.812 [2024-12-08 21:03:42.693441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.812 [2024-12-08 21:03:42.693451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.812 [2024-12-08 21:03:42.693509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.812 [2024-12-08 21:03:42.693524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.812 [2024-12-08 21:03:42.693535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.812 [2024-12-08 21:03:42.693565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.812 [2024-12-08 21:03:42.693693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.812 [2024-12-08 21:03:42.693711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.812 [2024-12-08 21:03:42.693726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.812 [2024-12-08 21:03:42.693736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.812 [2024-12-08 21:03:42.693801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.812 [2024-12-08 21:03:42.693817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:21.812 [2024-12-08 21:03:42.693832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.812 [2024-12-08 21:03:42.693842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.812 [2024-12-08 21:03:42.693900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.812 [2024-12-08 21:03:42.693914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.812 [2024-12-08 21:03:42.693926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.812 [2024-12-08 21:03:42.693936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.812 [2024-12-08 21:03:42.693999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.812 [2024-12-08 21:03:42.694016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.812 [2024-12-08 21:03:42.694028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.812 [2024-12-08 21:03:42.694037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.812 [2024-12-08 21:03:42.694286] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.769 ms, result 0 00:17:21.812 true 00:17:21.812 21:03:42 -- ftl/trim.sh@63 -- # killprocess 72551 00:17:21.812 21:03:42 -- common/autotest_common.sh@936 -- # '[' -z 72551 ']' 00:17:21.812 21:03:42 -- common/autotest_common.sh@940 -- # kill -0 72551 00:17:21.812 21:03:42 -- common/autotest_common.sh@941 -- # uname 00:17:21.812 21:03:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.812 21:03:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72551 00:17:21.812 killing process with pid 72551 00:17:21.812 21:03:42 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:21.812 21:03:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:21.812 21:03:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72551' 00:17:21.812 21:03:42 -- common/autotest_common.sh@955 -- # kill 72551 00:17:21.812 21:03:42 -- common/autotest_common.sh@960 -- # wait 72551 00:17:26.005 21:03:46 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:26.941 65536+0 records in 00:17:26.941 65536+0 records out 00:17:26.941 268435456 bytes (268 MB, 256 MiB) copied, 0.981706 s, 273 MB/s 00:17:26.941 21:03:47 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:26.941 [2024-12-08 21:03:47.948936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:26.941 [2024-12-08 21:03:47.949126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72749 ] 00:17:27.201 [2024-12-08 21:03:48.122443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.460 [2024-12-08 21:03:48.321933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.720 [2024-12-08 21:03:48.574974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.720 [2024-12-08 21:03:48.575064] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.720 [2024-12-08 21:03:48.724460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.720 [2024-12-08 21:03:48.724520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:27.720 [2024-12-08 21:03:48.724553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:27.720 [2024-12-08 21:03:48.724578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.720 [2024-12-08 21:03:48.727190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.720 [2024-12-08 21:03:48.727228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.720 [2024-12-08 21:03:48.727258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:17:27.720 [2024-12-08 21:03:48.727268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.720 [2024-12-08 21:03:48.727394] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:27.721 [2024-12-08 21:03:48.728342] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:27.721 [2024-12-08 21:03:48.728405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.728418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.721 [2024-12-08 21:03:48.728429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:17:27.721 [2024-12-08 21:03:48.728440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.729580] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:27.721 [2024-12-08 21:03:48.742343] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.742378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:27.721 [2024-12-08 21:03:48.742408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.764 ms 00:17:27.721 [2024-12-08 21:03:48.742418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.742520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.742539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:27.721 [2024-12-08 21:03:48.742550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:27.721 [2024-12-08 21:03:48.742560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.746564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.746597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.721 [2024-12-08 21:03:48.746625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.956 ms 00:17:27.721 [2024-12-08 21:03:48.746639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.746756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.746774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.721 [2024-12-08 21:03:48.746785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:27.721 [2024-12-08 21:03:48.746794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.746830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.746843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:27.721 [2024-12-08 21:03:48.746883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:27.721 [2024-12-08 21:03:48.746892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.746930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:27.721 [2024-12-08 21:03:48.750641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.750689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.721 [2024-12-08 21:03:48.750718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.730 ms 00:17:27.721 [2024-12-08 21:03:48.750732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.750791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.750806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:27.721 [2024-12-08 21:03:48.750817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:27.721 [2024-12-08 21:03:48.750827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.750849] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:27.721 [2024-12-08 21:03:48.750873] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:27.721 [2024-12-08 21:03:48.750909] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:27.721 [2024-12-08 21:03:48.750959] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:27.721 [2024-12-08 21:03:48.751030] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:27.721 [2024-12-08 21:03:48.751043] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:27.721 [2024-12-08 21:03:48.751056] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:27.721 [2024-12-08 21:03:48.751069] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751080] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751091] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:27.721 [2024-12-08 21:03:48.751100] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:27.721 [2024-12-08 21:03:48.751109] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:27.721 [2024-12-08 21:03:48.751148] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:27.721 [2024-12-08 21:03:48.751160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.751171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:27.721 [2024-12-08 21:03:48.751181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:17:27.721 [2024-12-08 21:03:48.751191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.751277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.721 [2024-12-08 21:03:48.751291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:27.721 [2024-12-08 21:03:48.751301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:27.721 [2024-12-08 21:03:48.751310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.721 [2024-12-08 21:03:48.751388] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:27.721 [2024-12-08 21:03:48.751415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:27.721 [2024-12-08 21:03:48.751427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:27.721 [2024-12-08 21:03:48.751456] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751475] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:27.721 [2024-12-08 21:03:48.751484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.721 [2024-12-08 21:03:48.751501] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:27.721 [2024-12-08 21:03:48.751509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:27.721 [2024-12-08 21:03:48.751518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.721 [2024-12-08 21:03:48.751526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:27.721 [2024-12-08 21:03:48.751547] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:27.721 [2024-12-08 21:03:48.751556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:27.721 [2024-12-08 21:03:48.751573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:27.721 [2024-12-08 21:03:48.751582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751590] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:27.721 [2024-12-08 21:03:48.751599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:27.721 [2024-12-08 21:03:48.751608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:27.721 [2024-12-08 21:03:48.751625] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:27.721 [2024-12-08 21:03:48.751651] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:27.721 [2024-12-08 21:03:48.751676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751693] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:27.721 [2024-12-08 21:03:48.751701] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:27.721 [2024-12-08 21:03:48.751726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.721 [2024-12-08 21:03:48.751743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:27.721 [2024-12-08 21:03:48.751754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:27.721 [2024-12-08 21:03:48.751763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.721 [2024-12-08 21:03:48.751772] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:27.721 [2024-12-08 21:03:48.751781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
sb_mirror 00:17:27.721 [2024-12-08 21:03:48.751790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.721 [2024-12-08 21:03:48.751804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.721 [2024-12-08 21:03:48.751814] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:27.721 [2024-12-08 21:03:48.751823] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:27.721 [2024-12-08 21:03:48.751832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:27.721 [2024-12-08 21:03:48.751841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:27.722 [2024-12-08 21:03:48.751849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:27.722 [2024-12-08 21:03:48.751858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:27.722 [2024-12-08 21:03:48.751868] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:27.722 [2024-12-08 21:03:48.751880] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.722 [2024-12-08 21:03:48.751890] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:27.722 [2024-12-08 21:03:48.751899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:27.722 [2024-12-08 21:03:48.751909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:27.722 [2024-12-08 21:03:48.751918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:27.722 [2024-12-08 21:03:48.751927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:27.722 [2024-12-08 21:03:48.751936] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:27.722 [2024-12-08 21:03:48.751945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:27.722 [2024-12-08 21:03:48.751954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:27.722 [2024-12-08 21:03:48.751964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:27.722 [2024-12-08 21:03:48.751973] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:27.722 [2024-12-08 21:03:48.751983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:27.722 [2024-12-08 21:03:48.751992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:27.722 [2024-12-08 21:03:48.752002] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:27.722 [2024-12-08 21:03:48.752011] 
upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:27.722 [2024-12-08 21:03:48.752027] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.722 [2024-12-08 21:03:48.752037] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:27.722 [2024-12-08 21:03:48.752047] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:27.722 [2024-12-08 21:03:48.752056] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:27.722 [2024-12-08 21:03:48.752066] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:27.722 [2024-12-08 21:03:48.752111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.722 [2024-12-08 21:03:48.752124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:27.722 [2024-12-08 21:03:48.752134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:17:27.722 [2024-12-08 21:03:48.752143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.768405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.768461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.983 [2024-12-08 21:03:48.768494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.207 ms 00:17:27.983 [2024-12-08 21:03:48.768504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.768682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.768699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:27.983 [2024-12-08 21:03:48.768725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:27.983 [2024-12-08 21:03:48.768748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.809010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.809052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.983 [2024-12-08 21:03:48.809082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.230 ms 00:17:27.983 [2024-12-08 21:03:48.809103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.809191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.809208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.983 [2024-12-08 21:03:48.809225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.983 [2024-12-08 21:03:48.809234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.809575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.809601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.983 [2024-12-08 21:03:48.809614] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:17:27.983 [2024-12-08 21:03:48.809624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.809754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.809778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.983 [2024-12-08 21:03:48.809789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:27.983 [2024-12-08 21:03:48.809799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.823808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.823843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.983 [2024-12-08 21:03:48.823874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.978 ms 00:17:27.983 [2024-12-08 21:03:48.823887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.836979] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:27.983 [2024-12-08 21:03:48.837019] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:27.983 [2024-12-08 21:03:48.837050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.837059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:27.983 [2024-12-08 21:03:48.837069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.052 ms 00:17:27.983 [2024-12-08 21:03:48.837079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.859972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.860008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:27.983 [2024-12-08 21:03:48.860044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.798 ms 00:17:27.983 [2024-12-08 21:03:48.860054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.872569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.872621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:27.983 [2024-12-08 21:03:48.872651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.425 ms 00:17:27.983 [2024-12-08 21:03:48.872685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.885201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.885250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:27.983 [2024-12-08 21:03:48.885279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.437 ms 00:17:27.983 [2024-12-08 21:03:48.885289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.885724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.885754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:27.983 [2024-12-08 21:03:48.885767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:17:27.983 
[2024-12-08 21:03:48.885777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.945556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.945613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:27.983 [2024-12-08 21:03:48.945645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.748 ms 00:17:27.983 [2024-12-08 21:03:48.945655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.955754] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:27.983 [2024-12-08 21:03:48.967010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.967058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:27.983 [2024-12-08 21:03:48.967097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.237 ms 00:17:27.983 [2024-12-08 21:03:48.967110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.967221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.967243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:27.983 [2024-12-08 21:03:48.967254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:27.983 [2024-12-08 21:03:48.967268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.967326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.983 [2024-12-08 21:03:48.967361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:27.983 [2024-12-08 21:03:48.967385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:27.983 [2024-12-08 21:03:48.967395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.983 [2024-12-08 21:03:48.969184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.984 [2024-12-08 21:03:48.969233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:27.984 [2024-12-08 21:03:48.969261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.761 ms 00:17:27.984 [2024-12-08 21:03:48.969270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.984 [2024-12-08 21:03:48.969306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.984 [2024-12-08 21:03:48.969318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:27.984 [2024-12-08 21:03:48.969334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:27.984 [2024-12-08 21:03:48.969343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.984 [2024-12-08 21:03:48.969380] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:27.984 [2024-12-08 21:03:48.969393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.984 [2024-12-08 21:03:48.969402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:27.984 [2024-12-08 21:03:48.969411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:27.984 [2024-12-08 21:03:48.969420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.984 [2024-12-08 
21:03:48.994524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.984 [2024-12-08 21:03:48.994582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:27.984 [2024-12-08 21:03:48.994613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.078 ms 00:17:27.984 [2024-12-08 21:03:48.994623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.984 [2024-12-08 21:03:48.994719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.984 [2024-12-08 21:03:48.994737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:27.984 [2024-12-08 21:03:48.994748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:27.984 [2024-12-08 21:03:48.994757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.984 [2024-12-08 21:03:48.996093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:27.984 [2024-12-08 21:03:48.999719] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.247 ms, result 0 00:17:27.984 [2024-12-08 21:03:49.000639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:27.984 [2024-12-08 21:03:49.015238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.400  [2024-12-08T21:03:51.379Z] Copying: 22/256 [MB] (22 MBps) [2024-12-08T21:03:52.316Z] Copying: 45/256 [MB] (22 MBps) [2024-12-08T21:03:53.251Z] Copying: 68/256 [MB] (22 MBps) [2024-12-08T21:03:54.186Z] Copying: 90/256 [MB] (22 MBps) [2024-12-08T21:03:55.121Z] Copying: 114/256 [MB] (23 MBps) [2024-12-08T21:03:56.053Z] Copying: 137/256 [MB] (23 MBps) [2024-12-08T21:03:57.433Z] Copying: 160/256 [MB] (23 MBps) [2024-12-08T21:03:58.371Z] Copying: 183/256 [MB] (23 MBps) [2024-12-08T21:03:59.325Z] Copying: 205/256 [MB] (22 MBps) [2024-12-08T21:04:00.270Z] Copying: 229/256 [MB] (23 MBps) [2024-12-08T21:04:00.270Z] Copying: 252/256 [MB] (23 MBps) [2024-12-08T21:04:00.270Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-08 21:04:00.175310] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:39.227 [2024-12-08 21:04:00.185763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.227 [2024-12-08 21:04:00.185817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:39.227 [2024-12-08 21:04:00.185860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:39.228 [2024-12-08 21:04:00.185872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.228 [2024-12-08 21:04:00.185902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:39.228 [2024-12-08 21:04:00.188952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.228 [2024-12-08 21:04:00.188994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:39.228 [2024-12-08 21:04:00.189023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.031 ms 00:17:39.228 [2024-12-08 21:04:00.189032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.228 [2024-12-08 21:04:00.190870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.228 [2024-12-08 
21:04:00.190920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:39.228 [2024-12-08 21:04:00.190950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:17:39.228 [2024-12-08 21:04:00.190960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.228 [2024-12-08 21:04:00.197453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.228 [2024-12-08 21:04:00.197504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:39.228 [2024-12-08 21:04:00.197534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.465 ms 00:17:39.228 [2024-12-08 21:04:00.197544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.228 [2024-12-08 21:04:00.204131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.228 [2024-12-08 21:04:00.204177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:39.228 [2024-12-08 21:04:00.204206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.530 ms 00:17:39.228 [2024-12-08 21:04:00.204217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.228 [2024-12-08 21:04:00.230014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.228 [2024-12-08 21:04:00.230065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:39.228 [2024-12-08 21:04:00.230119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.691 ms 00:17:39.228 [2024-12-08 21:04:00.230130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.228 [2024-12-08 21:04:00.245258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.228 [2024-12-08 21:04:00.245310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:39.228 [2024-12-08 21:04:00.245340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.057 ms 00:17:39.228 [2024-12-08 21:04:00.245351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.228 [2024-12-08 21:04:00.245500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.228 [2024-12-08 21:04:00.245534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:39.228 [2024-12-08 21:04:00.245545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:39.228 [2024-12-08 21:04:00.245556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.488 [2024-12-08 21:04:00.272157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.488 [2024-12-08 21:04:00.272208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:39.488 [2024-12-08 21:04:00.272239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.581 ms 00:17:39.488 [2024-12-08 21:04:00.272249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.488 [2024-12-08 21:04:00.298401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.488 [2024-12-08 21:04:00.298437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:39.488 [2024-12-08 21:04:00.298452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.005 ms 00:17:39.488 [2024-12-08 21:04:00.298475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.488 [2024-12-08 21:04:00.325610] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.488 [2024-12-08 21:04:00.325644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:39.488 [2024-12-08 21:04:00.325674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.064 ms 00:17:39.488 [2024-12-08 21:04:00.325683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.488 [2024-12-08 21:04:00.350589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.488 [2024-12-08 21:04:00.350621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:39.488 [2024-12-08 21:04:00.350650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.806 ms 00:17:39.488 [2024-12-08 21:04:00.350659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.488 [2024-12-08 21:04:00.350714] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:39.488 [2024-12-08 21:04:00.350735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 
21:04:00.350936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.350994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:39.488 [2024-12-08 21:04:00.351090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:17:39.489 [2024-12-08 21:04:00.351192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:39.489 [2024-12-08 21:04:00.351762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:39.489 [2024-12-08 21:04:00.351772] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4895114b-fe81-4086-8292-4d3881a52f6a 00:17:39.489 [2024-12-08 21:04:00.351781] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:39.489 [2024-12-08 21:04:00.351790] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:39.489 [2024-12-08 21:04:00.351799] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:39.489 [2024-12-08 21:04:00.351808] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:39.489 [2024-12-08 21:04:00.351817] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:39.489 [2024-12-08 21:04:00.351827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:39.489 [2024-12-08 21:04:00.351856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:39.489 [2024-12-08 21:04:00.351864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:39.489 [2024-12-08 21:04:00.351872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:39.489 [2024-12-08 21:04:00.351882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.489 [2024-12-08 21:04:00.351891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:39.489 [2024-12-08 21:04:00.351902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:17:39.489 [2024-12-08 21:04:00.351912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.489 [2024-12-08 21:04:00.365531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.489 [2024-12-08 21:04:00.365579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:39.489 [2024-12-08 21:04:00.365593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.580 ms 00:17:39.489 [2024-12-08 21:04:00.365609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.489 [2024-12-08 21:04:00.365862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.489 [2024-12-08 21:04:00.365888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:39.489 [2024-12-08 21:04:00.365900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 
00:17:39.489 [2024-12-08 21:04:00.365910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.489 [2024-12-08 21:04:00.403842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.489 [2024-12-08 21:04:00.403879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.489 [2024-12-08 21:04:00.403908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.489 [2024-12-08 21:04:00.403923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.489 [2024-12-08 21:04:00.404006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.489 [2024-12-08 21:04:00.404022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.489 [2024-12-08 21:04:00.404032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.404041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.404108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.404156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.490 [2024-12-08 21:04:00.404167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.404176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.404204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.404216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.490 [2024-12-08 21:04:00.404226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.404235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.479472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.479525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.490 [2024-12-08 21:04:00.479556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.479571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.510686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.510737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.490 [2024-12-08 21:04:00.510767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.510776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.510844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.510860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.490 [2024-12-08 21:04:00.510870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.510879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.510910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.510928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.490 [2024-12-08 21:04:00.510938] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.510947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.511087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.511105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.490 [2024-12-08 21:04:00.511115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.511146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.511194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.511214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.490 [2024-12-08 21:04:00.511225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.511235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.511277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.511312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.490 [2024-12-08 21:04:00.511323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.511333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.511384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.490 [2024-12-08 21:04:00.511403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.490 [2024-12-08 21:04:00.511417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.490 [2024-12-08 21:04:00.511427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.490 [2024-12-08 21:04:00.511579] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 325.865 ms, result 0 00:17:40.429 00:17:40.429 00:17:40.429 21:04:01 -- ftl/trim.sh@72 -- # svcpid=72896 00:17:40.429 21:04:01 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:40.429 21:04:01 -- ftl/trim.sh@73 -- # waitforlisten 72896 00:17:40.429 21:04:01 -- common/autotest_common.sh@829 -- # '[' -z 72896 ']' 00:17:40.429 21:04:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.429 21:04:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.429 21:04:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.429 21:04:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.429 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:17:40.705 [2024-12-08 21:04:01.555480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:40.705 [2024-12-08 21:04:01.555646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72896 ] 00:17:40.705 [2024-12-08 21:04:01.722725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.964 [2024-12-08 21:04:01.863399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:40.964 [2024-12-08 21:04:01.863598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.531 21:04:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.531 21:04:02 -- common/autotest_common.sh@862 -- # return 0 00:17:41.531 21:04:02 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:41.789 [2024-12-08 21:04:02.663909] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.789 [2024-12-08 21:04:02.663969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.049 [2024-12-08 21:04:02.834627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.834674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:42.049 [2024-12-08 21:04:02.834716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:42.049 [2024-12-08 21:04:02.834730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.838092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.838128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.049 [2024-12-08 21:04:02.838155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:17:42.049 [2024-12-08 21:04:02.838167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.838296] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:42.049 [2024-12-08 21:04:02.839116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:42.049 [2024-12-08 21:04:02.839154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.839168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.049 [2024-12-08 21:04:02.839185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:17:42.049 [2024-12-08 21:04:02.839197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.840427] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:42.049 [2024-12-08 21:04:02.853677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.853745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:42.049 [2024-12-08 21:04:02.853764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.261 ms 00:17:42.049 [2024-12-08 21:04:02.853781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.853883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.853911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:17:42.049 [2024-12-08 21:04:02.853924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:42.049 [2024-12-08 21:04:02.853940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.858109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.858153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.049 [2024-12-08 21:04:02.858170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.087 ms 00:17:42.049 [2024-12-08 21:04:02.858186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.858304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.858328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.049 [2024-12-08 21:04:02.858341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:42.049 [2024-12-08 21:04:02.858356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.858393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.858413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:42.049 [2024-12-08 21:04:02.858425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:42.049 [2024-12-08 21:04:02.858441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.858483] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:42.049 [2024-12-08 21:04:02.862110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.862284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.049 [2024-12-08 21:04:02.862403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.639 ms 00:17:42.049 [2024-12-08 21:04:02.862482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.862589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.862678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:42.049 [2024-12-08 21:04:02.862746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:42.049 [2024-12-08 21:04:02.862806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.862868] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:42.049 [2024-12-08 21:04:02.863056] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:42.049 [2024-12-08 21:04:02.863172] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:42.049 [2024-12-08 21:04:02.863364] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:42.049 [2024-12-08 21:04:02.863485] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:42.049 [2024-12-08 21:04:02.863633] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:17:42.049 [2024-12-08 21:04:02.863673] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:42.049 [2024-12-08 21:04:02.863690] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:42.049 [2024-12-08 21:04:02.863709] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:42.049 [2024-12-08 21:04:02.863721] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:42.049 [2024-12-08 21:04:02.863737] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:42.049 [2024-12-08 21:04:02.863748] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:42.049 [2024-12-08 21:04:02.863767] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:42.049 [2024-12-08 21:04:02.863781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.863797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:42.049 [2024-12-08 21:04:02.863810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:17:42.049 [2024-12-08 21:04:02.863825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.863932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.049 [2024-12-08 21:04:02.863955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:42.049 [2024-12-08 21:04:02.863967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:42.049 [2024-12-08 21:04:02.863982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.049 [2024-12-08 21:04:02.864065] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:42.049 [2024-12-08 21:04:02.864124] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:42.049 [2024-12-08 21:04:02.864139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.049 [2024-12-08 21:04:02.864155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.049 [2024-12-08 21:04:02.864168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:42.049 [2024-12-08 21:04:02.864183] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:42.049 [2024-12-08 21:04:02.864210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:42.049 [2024-12-08 21:04:02.864233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:42.049 [2024-12-08 21:04:02.864246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:42.049 [2024-12-08 21:04:02.864289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.049 [2024-12-08 21:04:02.864302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:42.049 [2024-12-08 21:04:02.864319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:42.049 [2024-12-08 21:04:02.864331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.049 [2024-12-08 21:04:02.864347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:42.049 [2024-12-08 21:04:02.864359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:42.049 [2024-12-08 21:04:02.864374] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.049 [2024-12-08 21:04:02.864386] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:42.049 [2024-12-08 21:04:02.864402] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:42.049 [2024-12-08 21:04:02.864414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.049 [2024-12-08 21:04:02.864430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:42.050 [2024-12-08 21:04:02.864442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:42.050 [2024-12-08 21:04:02.864458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:42.050 [2024-12-08 21:04:02.864485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:42.050 [2024-12-08 21:04:02.864519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:42.050 [2024-12-08 21:04:02.864531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:42.050 [2024-12-08 21:04:02.864563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:42.050 [2024-12-08 21:04:02.864576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:42.050 [2024-12-08 21:04:02.864591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:42.050 [2024-12-08 21:04:02.864604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:42.050 [2024-12-08 21:04:02.864619] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:42.050 [2024-12-08 21:04:02.864645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:42.050 [2024-12-08 21:04:02.864662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:42.050 [2024-12-08 21:04:02.864674] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:42.050 [2024-12-08 21:04:02.864689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:42.050 [2024-12-08 21:04:02.864701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:42.050 [2024-12-08 21:04:02.864716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:42.050 [2024-12-08 21:04:02.864727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.050 [2024-12-08 21:04:02.864742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:42.050 [2024-12-08 21:04:02.864754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:42.050 [2024-12-08 21:04:02.864774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.050 [2024-12-08 21:04:02.864785] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:42.050 [2024-12-08 21:04:02.864810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:42.050 [2024-12-08 21:04:02.864822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.050 [2024-12-08 21:04:02.864837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.050 [2024-12-08 21:04:02.864850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:42.050 [2024-12-08 21:04:02.864866] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:42.050 [2024-12-08 21:04:02.864878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:17:42.050 [2024-12-08 21:04:02.864893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:42.050 [2024-12-08 21:04:02.864905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:42.050 [2024-12-08 21:04:02.864921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:42.050 [2024-12-08 21:04:02.864933] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:42.050 [2024-12-08 21:04:02.864952] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.050 [2024-12-08 21:04:02.864966] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:42.050 [2024-12-08 21:04:02.864981] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:42.050 [2024-12-08 21:04:02.864994] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:42.050 [2024-12-08 21:04:02.865014] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:42.050 [2024-12-08 21:04:02.865026] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:42.050 [2024-12-08 21:04:02.865042] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:42.050 [2024-12-08 21:04:02.865054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:42.050 [2024-12-08 21:04:02.865069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:42.050 [2024-12-08 21:04:02.865095] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:42.050 [2024-12-08 21:04:02.865112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:42.050 [2024-12-08 21:04:02.865124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:42.050 [2024-12-08 21:04:02.865140] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:42.050 [2024-12-08 21:04:02.865154] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:42.050 [2024-12-08 21:04:02.865169] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:42.050 [2024-12-08 21:04:02.865183] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.050 [2024-12-08 21:04:02.865200] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:42.050 [2024-12-08 21:04:02.865212] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:42.050 [2024-12-08 21:04:02.865228] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:42.050 [2024-12-08 21:04:02.865240] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:42.050 [2024-12-08 21:04:02.865277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.865290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:42.050 [2024-12-08 21:04:02.865307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:17:42.050 [2024-12-08 21:04:02.865319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.881333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.881370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.050 [2024-12-08 21:04:02.881393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.931 ms 00:17:42.050 [2024-12-08 21:04:02.881406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.881521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.881538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:42.050 [2024-12-08 21:04:02.881551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:42.050 [2024-12-08 21:04:02.881562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.913744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.913786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.050 [2024-12-08 21:04:02.913809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.154 ms 00:17:42.050 [2024-12-08 21:04:02.913822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.913911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.913932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.050 [2024-12-08 21:04:02.913948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:42.050 [2024-12-08 21:04:02.913959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.914300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.914318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.050 [2024-12-08 21:04:02.914339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:17:42.050 [2024-12-08 21:04:02.914351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.914534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.914551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.050 [2024-12-08 21:04:02.914575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:17:42.050 [2024-12-08 21:04:02.914588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.929824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.929860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.050 [2024-12-08 21:04:02.929890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.207 ms 00:17:42.050 [2024-12-08 21:04:02.929902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.943022] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:42.050 [2024-12-08 21:04:02.943059] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:42.050 [2024-12-08 21:04:02.943110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.943126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:42.050 [2024-12-08 21:04:02.943143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.072 ms 00:17:42.050 [2024-12-08 21:04:02.943154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.966186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.966223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:42.050 [2024-12-08 21:04:02.966246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.928 ms 00:17:42.050 [2024-12-08 21:04:02.966258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.978683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.978840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:42.050 [2024-12-08 21:04:02.978875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.342 ms 00:17:42.050 [2024-12-08 21:04:02.978889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.991426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.991461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:42.050 [2024-12-08 21:04:02.991485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:17:42.050 [2024-12-08 21:04:02.991496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:02.991882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:02.991901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:42.050 [2024-12-08 21:04:02.991926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:17:42.050 [2024-12-08 21:04:02.991937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:03.056106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:03.056344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:42.050 [2024-12-08 21:04:03.056476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.122 ms 00:17:42.050 [2024-12-08 21:04:03.056525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 
21:04:03.066491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:42.050 [2024-12-08 21:04:03.077585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:03.077836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:42.050 [2024-12-08 21:04:03.077866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.867 ms 00:17:42.050 [2024-12-08 21:04:03.077886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:03.078002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:03.078031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:42.050 [2024-12-08 21:04:03.078045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:42.050 [2024-12-08 21:04:03.078069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:03.078186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:03.078225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:42.050 [2024-12-08 21:04:03.078240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:42.050 [2024-12-08 21:04:03.078257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:03.080914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:03.080973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:42.050 [2024-12-08 21:04:03.080989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.629 ms 00:17:42.050 [2024-12-08 21:04:03.081005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:03.081044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:03.081081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:42.050 [2024-12-08 21:04:03.081097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:42.050 [2024-12-08 21:04:03.081113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.050 [2024-12-08 21:04:03.081163] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:42.050 [2024-12-08 21:04:03.081188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.050 [2024-12-08 21:04:03.081201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:42.050 [2024-12-08 21:04:03.081216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:42.050 [2024-12-08 21:04:03.081227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.309 [2024-12-08 21:04:03.106861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.309 [2024-12-08 21:04:03.106898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:42.309 [2024-12-08 21:04:03.106921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.596 ms 00:17:42.309 [2024-12-08 21:04:03.106933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.309 [2024-12-08 21:04:03.107031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.309 [2024-12-08 21:04:03.107049] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:42.310 [2024-12-08 21:04:03.107066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:42.310 [2024-12-08 21:04:03.107117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.310 [2024-12-08 21:04:03.108243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:42.310 [2024-12-08 21:04:03.111815] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.170 ms, result 0 00:17:42.310 [2024-12-08 21:04:03.113228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:42.310 Some configs were skipped because the RPC state that can call them passed over. 00:17:42.310 21:04:03 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:42.569 [2024-12-08 21:04:03.411836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.569 [2024-12-08 21:04:03.412014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:42.569 [2024-12-08 21:04:03.412163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.846 ms 00:17:42.569 [2024-12-08 21:04:03.412319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.569 [2024-12-08 21:04:03.412443] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 25.439 ms, result 0 00:17:42.569 true 00:17:42.569 21:04:03 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:42.829 [2024-12-08 21:04:03.714025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.829 [2024-12-08 21:04:03.714261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:42.829 [2024-12-08 21:04:03.714413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.706 ms 00:17:42.829 [2024-12-08 21:04:03.714466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.829 [2024-12-08 21:04:03.714588] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 26.276 ms, result 0 00:17:42.829 true 00:17:42.829 21:04:03 -- ftl/trim.sh@81 -- # killprocess 72896 00:17:42.829 21:04:03 -- common/autotest_common.sh@936 -- # '[' -z 72896 ']' 00:17:42.829 21:04:03 -- common/autotest_common.sh@940 -- # kill -0 72896 00:17:42.829 21:04:03 -- common/autotest_common.sh@941 -- # uname 00:17:42.829 21:04:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:42.829 21:04:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72896 00:17:42.829 killing process with pid 72896 00:17:42.829 21:04:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:42.829 21:04:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:42.829 21:04:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72896' 00:17:42.829 21:04:03 -- common/autotest_common.sh@955 -- # kill 72896 00:17:42.829 21:04:03 -- common/autotest_common.sh@960 -- # wait 72896 00:17:43.772 [2024-12-08 21:04:04.493492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.493555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:17:43.772 [2024-12-08 21:04:04.493574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:43.772 [2024-12-08 21:04:04.493591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.493618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:43.772 [2024-12-08 21:04:04.496306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.496337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:43.772 [2024-12-08 21:04:04.496356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:17:43.772 [2024-12-08 21:04:04.496368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.496654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.496671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:43.772 [2024-12-08 21:04:04.496684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:17:43.772 [2024-12-08 21:04:04.496695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.500096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.500381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:43.772 [2024-12-08 21:04:04.500413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms 00:17:43.772 [2024-12-08 21:04:04.500426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.506607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.506657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:43.772 [2024-12-08 21:04:04.506676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:17:43.772 [2024-12-08 21:04:04.506689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.516879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.516912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:43.772 [2024-12-08 21:04:04.516931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.135 ms 00:17:43.772 [2024-12-08 21:04:04.516942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.524348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.524388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:43.772 [2024-12-08 21:04:04.524406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.363 ms 00:17:43.772 [2024-12-08 21:04:04.524418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.524573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.524591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:43.772 [2024-12-08 21:04:04.524604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:43.772 [2024-12-08 21:04:04.524615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:43.772 [2024-12-08 21:04:04.535158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.535190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:43.772 [2024-12-08 21:04:04.535210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.513 ms 00:17:43.772 [2024-12-08 21:04:04.535222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.545314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.545346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:43.772 [2024-12-08 21:04:04.545373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.034 ms 00:17:43.772 [2024-12-08 21:04:04.545384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.555094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.555239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:43.772 [2024-12-08 21:04:04.555272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.652 ms 00:17:43.772 [2024-12-08 21:04:04.555285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.565206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.772 [2024-12-08 21:04:04.565239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:43.772 [2024-12-08 21:04:04.565259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.839 ms 00:17:43.772 [2024-12-08 21:04:04.565270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.772 [2024-12-08 21:04:04.565313] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:43.772 [2024-12-08 21:04:04.565333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:43.772 [2024-12-08 21:04:04.565496] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 
21:04:04.565835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.565998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:17:43.773 [2024-12-08 21:04:04.566236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:43.773 [2024-12-08 21:04:04.566867] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:43.774 [2024-12-08 21:04:04.566888] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4895114b-fe81-4086-8292-4d3881a52f6a 00:17:43.774 [2024-12-08 21:04:04.566901] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:43.774 [2024-12-08 21:04:04.566916] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:43.774 [2024-12-08 21:04:04.566928] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:43.774 [2024-12-08 21:04:04.566944] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:43.774 [2024-12-08 21:04:04.566956] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:43.774 [2024-12-08 21:04:04.566972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:43.774 [2024-12-08 21:04:04.566984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:43.774 [2024-12-08 21:04:04.566998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:43.774 [2024-12-08 21:04:04.567009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:43.774 [2024-12-08 21:04:04.567025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.774 [2024-12-08 21:04:04.567038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:43.774 [2024-12-08 21:04:04.567054] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:17:43.774 [2024-12-08 21:04:04.567082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.580164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.774 [2024-12-08 21:04:04.580310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:43.774 [2024-12-08 21:04:04.580349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.016 ms 00:17:43.774 [2024-12-08 21:04:04.580363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.580600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.774 [2024-12-08 21:04:04.580617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:43.774 [2024-12-08 21:04:04.580641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:43.774 [2024-12-08 21:04:04.580669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.625548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.625586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.774 [2024-12-08 21:04:04.625604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.625615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.625697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.625713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.774 [2024-12-08 21:04:04.625729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.625740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.625799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.625817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.774 [2024-12-08 21:04:04.625832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.625842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.625868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.625880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.774 [2024-12-08 21:04:04.625892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.625904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.704889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.705183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.774 [2024-12-08 21:04:04.705216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.705230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.734964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.735001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:17:43.774 [2024-12-08 21:04:04.735023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.735034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.735142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.735162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.774 [2024-12-08 21:04:04.735177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.735189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.735240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.735253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.774 [2024-12-08 21:04:04.735266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.735277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.735407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.735425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.774 [2024-12-08 21:04:04.735439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.735465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.735546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.735563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:43.774 [2024-12-08 21:04:04.735576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.735588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.735636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.735657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.774 [2024-12-08 21:04:04.735674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.735685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.735739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.774 [2024-12-08 21:04:04.735753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.774 [2024-12-08 21:04:04.735767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.774 [2024-12-08 21:04:04.735778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.774 [2024-12-08 21:04:04.735924] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 242.411 ms, result 0 00:17:44.711 21:04:05 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:44.711 21:04:05 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:44.711 [2024-12-08 21:04:05.695763] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:17:44.711 [2024-12-08 21:04:05.696179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72949 ] 00:17:44.969 [2024-12-08 21:04:05.866614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.227 [2024-12-08 21:04:06.018854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.227 [2024-12-08 21:04:06.265588] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:45.227 [2024-12-08 21:04:06.265657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:45.488 [2024-12-08 21:04:06.414742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.488 [2024-12-08 21:04:06.414788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:45.488 [2024-12-08 21:04:06.414822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:45.488 [2024-12-08 21:04:06.414832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.488 [2024-12-08 21:04:06.417859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.488 [2024-12-08 21:04:06.417898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.488 [2024-12-08 21:04:06.417929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.002 ms 00:17:45.488 [2024-12-08 21:04:06.417939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.488 [2024-12-08 21:04:06.418067] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:45.488 [2024-12-08 21:04:06.419068] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:45.488 [2024-12-08 21:04:06.419169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.488 [2024-12-08 21:04:06.419198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.488 [2024-12-08 21:04:06.419209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:17:45.488 [2024-12-08 21:04:06.419220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.488 [2024-12-08 21:04:06.420500] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:45.488 [2024-12-08 21:04:06.435525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.488 [2024-12-08 21:04:06.435561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:45.488 [2024-12-08 21:04:06.435576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.027 ms 00:17:45.488 [2024-12-08 21:04:06.435586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.488 [2024-12-08 21:04:06.435686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.488 [2024-12-08 21:04:06.435704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:45.488 [2024-12-08 21:04:06.435715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:45.488 [2024-12-08 21:04:06.435725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.488 [2024-12-08 21:04:06.440157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:45.488 [2024-12-08 21:04:06.440203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:45.488 [2024-12-08 21:04:06.440239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.385 ms 00:17:45.488 [2024-12-08 21:04:06.440280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.488 [2024-12-08 21:04:06.440406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.488 [2024-12-08 21:04:06.440426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:45.488 [2024-12-08 21:04:06.440470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:45.488 [2024-12-08 21:04:06.440482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.488 [2024-12-08 21:04:06.440552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.488 [2024-12-08 21:04:06.440601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:45.488 [2024-12-08 21:04:06.440612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:45.488 [2024-12-08 21:04:06.440637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.488 [2024-12-08 21:04:06.440678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:45.488 [2024-12-08 21:04:06.444596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.488 [2024-12-08 21:04:06.444646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:45.488 [2024-12-08 21:04:06.444659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.935 ms 00:17:45.488 [2024-12-08 21:04:06.444673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.489 [2024-12-08 21:04:06.444728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.489 [2024-12-08 21:04:06.444743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:45.489 [2024-12-08 21:04:06.444753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:45.489 [2024-12-08 21:04:06.444762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.489 [2024-12-08 21:04:06.444783] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:45.489 [2024-12-08 21:04:06.444805] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:45.489 [2024-12-08 21:04:06.444837] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:45.489 [2024-12-08 21:04:06.444856] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:45.489 [2024-12-08 21:04:06.444918] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:45.489 [2024-12-08 21:04:06.444931] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:45.489 [2024-12-08 21:04:06.444942] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:45.489 [2024-12-08 21:04:06.444953] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:45.489 [2024-12-08 21:04:06.444964] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:45.489 [2024-12-08 21:04:06.444973] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:45.489 [2024-12-08 21:04:06.444982] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:45.489 [2024-12-08 21:04:06.444990] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:45.489 [2024-12-08 21:04:06.445002] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:45.489 [2024-12-08 21:04:06.445010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.489 [2024-12-08 21:04:06.445019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:45.489 [2024-12-08 21:04:06.445028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:17:45.489 [2024-12-08 21:04:06.445037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.489 [2024-12-08 21:04:06.445154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.489 [2024-12-08 21:04:06.445172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:45.489 [2024-12-08 21:04:06.445183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:45.489 [2024-12-08 21:04:06.445193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.489 [2024-12-08 21:04:06.445275] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:45.489 [2024-12-08 21:04:06.445291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:45.489 [2024-12-08 21:04:06.445302] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:45.489 [2024-12-08 21:04:06.445312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445322] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:45.489 [2024-12-08 21:04:06.445331] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:45.489 [2024-12-08 21:04:06.445350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:45.489 [2024-12-08 21:04:06.445359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:45.489 [2024-12-08 21:04:06.445377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:45.489 [2024-12-08 21:04:06.445386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:45.489 [2024-12-08 21:04:06.445394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:45.489 [2024-12-08 21:04:06.445403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:45.489 [2024-12-08 21:04:06.445423] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:45.489 [2024-12-08 21:04:06.445433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445458] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:45.489 [2024-12-08 21:04:06.445482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:45.489 [2024-12-08 21:04:06.445490] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:45.489 [2024-12-08 21:04:06.445506] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:45.489 [2024-12-08 21:04:06.445514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:45.489 [2024-12-08 21:04:06.445522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:45.489 [2024-12-08 21:04:06.445530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:45.489 [2024-12-08 21:04:06.445545] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:45.489 [2024-12-08 21:04:06.445553] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:45.489 [2024-12-08 21:04:06.445569] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:45.489 [2024-12-08 21:04:06.445577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:45.489 [2024-12-08 21:04:06.445592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:45.489 [2024-12-08 21:04:06.445600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:45.489 [2024-12-08 21:04:06.445615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:45.489 [2024-12-08 21:04:06.445623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:45.489 [2024-12-08 21:04:06.445639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:45.489 [2024-12-08 21:04:06.445647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:45.489 [2024-12-08 21:04:06.445655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:45.489 [2024-12-08 21:04:06.445662] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:45.489 [2024-12-08 21:04:06.445671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:45.489 [2024-12-08 21:04:06.445680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:45.489 [2024-12-08 21:04:06.445693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:45.489 [2024-12-08 21:04:06.445702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:45.489 [2024-12-08 21:04:06.445710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:45.489 [2024-12-08 21:04:06.445718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:45.489 [2024-12-08 21:04:06.445727] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:45.489 [2024-12-08 21:04:06.445736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:45.489 [2024-12-08 21:04:06.445744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:45.489 
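The dump_region tables above list every FTL region with its offset and size in MiB, which makes the NV cache layout easy to sanity-check: sorted by offset, each region should start exactly where the previous one ends, and the data region should fit inside the 5171.00 MiB cache device capacity reported earlier. A minimal standalone sketch of that check in Python, with the NV cache values transcribed from the dump above (the logger rounds both fields to two decimals, e.g. 0.12 MiB is really 32 blocks of 4 KiB = 0.125 MiB, hence the small tolerance):

# NV cache regions as (name, offset_mib, size_mib), copied from the
# ftl_layout dump above; the logger rounds both fields to 2 decimals.
regions = [
    ("sb",              0.00,    0.12),
    ("l2p",             0.12,   90.00),
    ("band_md",        90.12,    0.50),
    ("band_md_mirror", 90.62,    0.50),
    ("p2l0",           91.12,    4.00),
    ("p2l1",           95.12,    4.00),
    ("p2l2",           99.12,    4.00),
    ("p2l3",          103.12,    4.00),
    ("trim_md",       107.12,    0.25),
    ("trim_md_mirror",107.38,    0.25),
    ("nvc_md",        107.62,    0.12),
    ("nvc_md_mirror", 107.75,    0.12),
    ("data_nvc",      107.88, 4096.00),
]

prev_end = 0.0
for name, off, size in sorted(regions, key=lambda r: r[1]):
    # Allow 0.02 MiB of slack for the two-decimal rounding in the log.
    assert abs(off - prev_end) <= 0.02, f"gap/overlap before {name}"
    prev_end = off + size
print(f"contiguous; regions end at {prev_end:.2f} MiB")

Run against the values above, the check passes and ends at 4203.88 MiB. The superblock dump that follows confirms the same numbers in raw 4 KiB blocks (e.g. the l2p region, type 0x2, is 0x5a00 blocks = 90 MiB), and its trailing free region type 0xfffffffe of 0x3c720 blocks = 967.12 MiB brings the total to exactly the 5171.00 MiB NV cache capacity. The base-device layout is not strictly back-to-back: sb_mirror ends at 0.12 MiB but data_btm starts at 0.25 MiB, and the base-dev superblock dump accounts for that gap explicitly as a 0x20-block type 0xfffffffe region.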
[2024-12-08 21:04:06.445753] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:45.489 [2024-12-08 21:04:06.445764] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:45.489 [2024-12-08 21:04:06.445774] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:45.489 [2024-12-08 21:04:06.445783] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:45.489 [2024-12-08 21:04:06.445791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:45.489 [2024-12-08 21:04:06.445800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:45.489 [2024-12-08 21:04:06.445809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:45.489 [2024-12-08 21:04:06.445833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:45.489 [2024-12-08 21:04:06.445843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:45.489 [2024-12-08 21:04:06.445852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:45.489 [2024-12-08 21:04:06.445861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:45.489 [2024-12-08 21:04:06.445870] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:45.489 [2024-12-08 21:04:06.445879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:45.489 [2024-12-08 21:04:06.445888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:45.489 [2024-12-08 21:04:06.445898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:45.489 [2024-12-08 21:04:06.445906] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:45.489 [2024-12-08 21:04:06.445921] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:45.489 [2024-12-08 21:04:06.445931] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:45.489 [2024-12-08 21:04:06.445941] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:45.489 [2024-12-08 21:04:06.445950] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:45.489 [2024-12-08 21:04:06.445961] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:45.489 [2024-12-08 21:04:06.445971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.490 [2024-12-08 21:04:06.445980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:45.490 [2024-12-08 21:04:06.445990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:17:45.490 [2024-12-08 21:04:06.445998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.490 [2024-12-08 21:04:06.462037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.490 [2024-12-08 21:04:06.462276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.490 [2024-12-08 21:04:06.462390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.972 ms 00:17:45.490 [2024-12-08 21:04:06.462439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.490 [2024-12-08 21:04:06.462671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.490 [2024-12-08 21:04:06.462777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:45.490 [2024-12-08 21:04:06.462877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:45.490 [2024-12-08 21:04:06.462983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.490 [2024-12-08 21:04:06.504713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.490 [2024-12-08 21:04:06.504881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.490 [2024-12-08 21:04:06.505029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.664 ms 00:17:45.490 [2024-12-08 21:04:06.505087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.490 [2024-12-08 21:04:06.505263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.490 [2024-12-08 21:04:06.505314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:45.490 [2024-12-08 21:04:06.505501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:45.490 [2024-12-08 21:04:06.505546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.490 [2024-12-08 21:04:06.505880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.490 [2024-12-08 21:04:06.505988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:45.490 [2024-12-08 21:04:06.506009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:17:45.490 [2024-12-08 21:04:06.506026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.490 [2024-12-08 21:04:06.506231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.490 [2024-12-08 21:04:06.506251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:45.490 [2024-12-08 21:04:06.506264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:17:45.490 [2024-12-08 21:04:06.506274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.490 [2024-12-08 21:04:06.520208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.490 [2024-12-08 21:04:06.520409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:45.490 [2024-12-08 21:04:06.520434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.886 ms 00:17:45.490 [2024-12-08 21:04:06.520453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.533960] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:45.749 [2024-12-08 21:04:06.533996] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:45.749 [2024-12-08 21:04:06.534028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.534038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:45.749 [2024-12-08 21:04:06.534065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.420 ms 00:17:45.749 [2024-12-08 21:04:06.534075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.557648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.557691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:45.749 [2024-12-08 21:04:06.557706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.462 ms 00:17:45.749 [2024-12-08 21:04:06.557715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.570165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.570308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:45.749 [2024-12-08 21:04:06.570344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.374 ms 00:17:45.749 [2024-12-08 21:04:06.570355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.582703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.582739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:45.749 [2024-12-08 21:04:06.582753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.247 ms 00:17:45.749 [2024-12-08 21:04:06.582762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.583162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.583182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:45.749 [2024-12-08 21:04:06.583208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:45.749 [2024-12-08 21:04:06.583222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.641669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.641723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:45.749 [2024-12-08 21:04:06.641739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.416 ms 00:17:45.749 [2024-12-08 21:04:06.641755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.651663] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:45.749 [2024-12-08 21:04:06.662646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.662882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:45.749 [2024-12-08 
21:04:06.662909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.786 ms 00:17:45.749 [2024-12-08 21:04:06.662920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.663034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.663051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:45.749 [2024-12-08 21:04:06.663067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:45.749 [2024-12-08 21:04:06.663137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.663204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.663220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:45.749 [2024-12-08 21:04:06.663246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:45.749 [2024-12-08 21:04:06.663256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.665085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.665129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:45.749 [2024-12-08 21:04:06.665159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.803 ms 00:17:45.749 [2024-12-08 21:04:06.665169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.665204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.665222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:45.749 [2024-12-08 21:04:06.665233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:45.749 [2024-12-08 21:04:06.665243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.665279] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:45.749 [2024-12-08 21:04:06.665293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.665302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:45.749 [2024-12-08 21:04:06.665312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:45.749 [2024-12-08 21:04:06.665321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.689801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.689851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:45.749 [2024-12-08 21:04:06.689884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.453 ms 00:17:45.749 [2024-12-08 21:04:06.689894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.689986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.749 [2024-12-08 21:04:06.690003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:45.749 [2024-12-08 21:04:06.690014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:45.749 [2024-12-08 21:04:06.690024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.749 [2024-12-08 21:04:06.691303] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:45.749 [2024-12-08 21:04:06.694649] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.118 ms, result 0 00:17:45.749 [2024-12-08 21:04:06.695444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:45.749 [2024-12-08 21:04:06.709772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.688  [2024-12-08T21:04:09.112Z] Copying: 24/256 [MB] (24 MBps) [2024-12-08T21:04:10.048Z] Copying: 46/256 [MB] (22 MBps) [2024-12-08T21:04:10.984Z] Copying: 68/256 [MB] (22 MBps) [2024-12-08T21:04:11.921Z] Copying: 90/256 [MB] (22 MBps) [2024-12-08T21:04:12.859Z] Copying: 112/256 [MB] (21 MBps) [2024-12-08T21:04:13.797Z] Copying: 133/256 [MB] (20 MBps) [2024-12-08T21:04:14.733Z] Copying: 154/256 [MB] (21 MBps) [2024-12-08T21:04:16.109Z] Copying: 176/256 [MB] (21 MBps) [2024-12-08T21:04:17.046Z] Copying: 199/256 [MB] (22 MBps) [2024-12-08T21:04:18.017Z] Copying: 221/256 [MB] (22 MBps) [2024-12-08T21:04:18.276Z] Copying: 243/256 [MB] (22 MBps) [2024-12-08T21:04:18.276Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-08 21:04:18.275284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.494 [2024-12-08 21:04:18.285727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.494 [2024-12-08 21:04:18.285770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.494 [2024-12-08 21:04:18.285787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:57.494 [2024-12-08 21:04:18.285797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.494 [2024-12-08 21:04:18.285822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:57.494 [2024-12-08 21:04:18.288593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.494 [2024-12-08 21:04:18.288737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.494 [2024-12-08 21:04:18.288759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.753 ms 00:17:57.494 [2024-12-08 21:04:18.288770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.494 [2024-12-08 21:04:18.289019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.494 [2024-12-08 21:04:18.289050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.494 [2024-12-08 21:04:18.289061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:17:57.494 [2024-12-08 21:04:18.289074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.494 [2024-12-08 21:04:18.292070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.494 [2024-12-08 21:04:18.292243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.494 [2024-12-08 21:04:18.292285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:17:57.494 [2024-12-08 21:04:18.292297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.494 [2024-12-08 21:04:18.298294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.494 [2024-12-08 21:04:18.298322] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:57.494 [2024-12-08 21:04:18.298335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.954 ms 00:17:57.494 [2024-12-08 21:04:18.298344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.494 [2024-12-08 21:04:18.322629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.494 [2024-12-08 21:04:18.322666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.494 [2024-12-08 21:04:18.322681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.218 ms 00:17:57.494 [2024-12-08 21:04:18.322690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.495 [2024-12-08 21:04:18.337348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.495 [2024-12-08 21:04:18.337392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.495 [2024-12-08 21:04:18.337407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.605 ms 00:17:57.495 [2024-12-08 21:04:18.337417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.495 [2024-12-08 21:04:18.337558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.495 [2024-12-08 21:04:18.337576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.495 [2024-12-08 21:04:18.337587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:57.495 [2024-12-08 21:04:18.337596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.495 [2024-12-08 21:04:18.361825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.495 [2024-12-08 21:04:18.361861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:57.495 [2024-12-08 21:04:18.361875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.210 ms 00:17:57.495 [2024-12-08 21:04:18.361883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.495 [2024-12-08 21:04:18.385843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.495 [2024-12-08 21:04:18.385878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:57.495 [2024-12-08 21:04:18.385892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.907 ms 00:17:57.495 [2024-12-08 21:04:18.385901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.495 [2024-12-08 21:04:18.409562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.495 [2024-12-08 21:04:18.409596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.495 [2024-12-08 21:04:18.409610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.609 ms 00:17:57.495 [2024-12-08 21:04:18.409619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.495 [2024-12-08 21:04:18.433455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.495 [2024-12-08 21:04:18.433490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.495 [2024-12-08 21:04:18.433503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.758 ms 00:17:57.495 [2024-12-08 21:04:18.433512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.495 [2024-12-08 21:04:18.433564] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:17:57.495 [2024-12-08 21:04:18.433585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.433992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.495 [2024-12-08 21:04:18.434238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434318] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434578] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.496 [2024-12-08 21:04:18.434595] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.496 [2024-12-08 21:04:18.434605] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4895114b-fe81-4086-8292-4d3881a52f6a 00:17:57.496 [2024-12-08 21:04:18.434617] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.496 [2024-12-08 21:04:18.434626] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.496 [2024-12-08 21:04:18.434634] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.496 [2024-12-08 21:04:18.434643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.496 [2024-12-08 21:04:18.434652] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.496 [2024-12-08 21:04:18.434667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:57.496 [2024-12-08 21:04:18.434676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.496 [2024-12-08 21:04:18.434684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.496 [2024-12-08 21:04:18.434692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:57.496 [2024-12-08 21:04:18.434701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.496 [2024-12-08 21:04:18.434710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.496 [2024-12-08 21:04:18.434720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:17:57.496 [2024-12-08 21:04:18.434730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.496 [2024-12-08 21:04:18.447644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.496 [2024-12-08 21:04:18.447675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.496 [2024-12-08 21:04:18.447695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.876 ms 00:17:57.496 [2024-12-08 21:04:18.447704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.496 [2024-12-08 21:04:18.447913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.496 [2024-12-08 21:04:18.447928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.496 [2024-12-08 21:04:18.447938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:17:57.496 [2024-12-08 21:04:18.447947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.496 [2024-12-08 21:04:18.485277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.496 [2024-12-08 21:04:18.485312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.496 [2024-12-08 21:04:18.485348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.496 [2024-12-08 21:04:18.485358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.496 [2024-12-08 21:04:18.485439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.496 [2024-12-08 21:04:18.485454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.496 [2024-12-08 21:04:18.485464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.496 [2024-12-08 21:04:18.485488] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.496 [2024-12-08 21:04:18.485534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.496 [2024-12-08 21:04:18.485549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.496 [2024-12-08 21:04:18.485559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.496 [2024-12-08 21:04:18.485572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.496 [2024-12-08 21:04:18.485593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.496 [2024-12-08 21:04:18.485604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.496 [2024-12-08 21:04:18.485613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.496 [2024-12-08 21:04:18.485621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.756 [2024-12-08 21:04:18.562336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.756 [2024-12-08 21:04:18.562387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.756 [2024-12-08 21:04:18.562407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.756 [2024-12-08 21:04:18.562417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.756 [2024-12-08 21:04:18.592304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.756 [2024-12-08 21:04:18.592477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.756 [2024-12-08 21:04:18.592504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.756 [2024-12-08 21:04:18.592517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.756 [2024-12-08 21:04:18.592626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.756 [2024-12-08 21:04:18.592658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.756 [2024-12-08 21:04:18.592668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.756 [2024-12-08 21:04:18.592677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.756 [2024-12-08 21:04:18.592714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.756 [2024-12-08 21:04:18.592726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.756 [2024-12-08 21:04:18.592735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.756 [2024-12-08 21:04:18.592744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.756 [2024-12-08 21:04:18.592859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.756 [2024-12-08 21:04:18.592874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.756 [2024-12-08 21:04:18.592884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.756 [2024-12-08 21:04:18.592893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.756 [2024-12-08 21:04:18.592937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.756 [2024-12-08 21:04:18.592951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.756 [2024-12-08 21:04:18.592961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:57.756 [2024-12-08 21:04:18.592969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.756 [2024-12-08 21:04:18.593007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.756 [2024-12-08 21:04:18.593020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.756 [2024-12-08 21:04:18.593029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.756 [2024-12-08 21:04:18.593038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.756 [2024-12-08 21:04:18.593085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.756 [2024-12-08 21:04:18.593101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.756 [2024-12-08 21:04:18.593111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.756 [2024-12-08 21:04:18.593120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.756 [2024-12-08 21:04:18.593323] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 307.596 ms, result 0 00:17:58.692 00:17:58.692 00:17:58.692 21:04:19 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:58.692 21:04:19 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:58.952 21:04:19 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:59.213 [2024-12-08 21:04:20.087469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
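Everything from here is the next test step (ftl/trim.sh@90) pushing the random pattern through ftl0 with spdk_dd, which restarts the FTL device and replays the same management sequence. Each management step is traced as an Action / name / duration / status quadruplet, and finish_msg then reports the whole process ('FTL startup', duration = 276.118 ms and 'FTL shutdown', duration = 307.596 ms above), so per-step totals can be recovered from the log text. A small sketch, assuming only the trace_step line format visible in this log (this is log scraping, not an SPDK API):

import re
from collections import defaultdict

# trace_step logs each management step as four *NOTICE* entries:
#   Action / name: <step> / duration: <ms> ms / status: <code>
# This pattern matches the duration entries in the format shown above.
DURATION = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[(?P<dev>\w+)\] "
    r"duration: (?P<ms>[\d.]+) ms"
)

def step_time_per_device(log_text: str) -> dict[str, float]:
    """Sum trace_step durations (ms) per FTL device, e.g. {'ftl0': ...}."""
    totals: defaultdict[str, float] = defaultdict(float)
    for m in DURATION.finditer(log_text):
        totals[m.group("dev")] += float(m.group("ms"))
    return dict(totals)

Applied to just the shutdown steps above (Deinit core IO channel through Open base bdev), the traced durations sum to roughly 160 ms of the 307.596 ms finish_msg total; much of the difference sits in the rollback entries, which log duration: 0.000 ms even though the timestamps show tens of milliseconds elapsing between them (e.g. 21:04:18.485 to 21:04:18.562 around the NV cache rollback).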
00:17:59.213 [2024-12-08 21:04:20.087643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73098 ] 00:17:59.472 [2024-12-08 21:04:20.256113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.472 [2024-12-08 21:04:20.399551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.732 [2024-12-08 21:04:20.647068] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:59.732 [2024-12-08 21:04:20.647151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:59.993 [2024-12-08 21:04:20.797366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 21:04:20.797410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:59.993 [2024-12-08 21:04:20.797427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:59.993 [2024-12-08 21:04:20.797436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.799955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 21:04:20.799994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.993 [2024-12-08 21:04:20.800008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.494 ms 00:17:59.993 [2024-12-08 21:04:20.800026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.800186] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:59.993 [2024-12-08 21:04:20.801149] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:59.993 [2024-12-08 21:04:20.801185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 21:04:20.801198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.993 [2024-12-08 21:04:20.801208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:17:59.993 [2024-12-08 21:04:20.801218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.802318] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:59.993 [2024-12-08 21:04:20.815081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 21:04:20.815115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:59.993 [2024-12-08 21:04:20.815130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.764 ms 00:17:59.993 [2024-12-08 21:04:20.815139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.815238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 21:04:20.815257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:59.993 [2024-12-08 21:04:20.815267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:59.993 [2024-12-08 21:04:20.815276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.819148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 
21:04:20.819181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.993 [2024-12-08 21:04:20.819194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.825 ms 00:17:59.993 [2024-12-08 21:04:20.819208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.819315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 21:04:20.819332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.993 [2024-12-08 21:04:20.819343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:59.993 [2024-12-08 21:04:20.819352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.819384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 21:04:20.819397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:59.993 [2024-12-08 21:04:20.819407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:59.993 [2024-12-08 21:04:20.819416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.819451] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:59.993 [2024-12-08 21:04:20.823190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 21:04:20.823223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.993 [2024-12-08 21:04:20.823253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.756 ms 00:17:59.993 [2024-12-08 21:04:20.823267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.823325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-12-08 21:04:20.823342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:59.993 [2024-12-08 21:04:20.823353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:59.993 [2024-12-08 21:04:20.823362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-12-08 21:04:20.823384] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:59.993 [2024-12-08 21:04:20.823409] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:59.993 [2024-12-08 21:04:20.823444] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:59.993 [2024-12-08 21:04:20.823464] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:59.993 [2024-12-08 21:04:20.823530] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:59.993 [2024-12-08 21:04:20.823544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:59.993 [2024-12-08 21:04:20.823557] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:59.993 [2024-12-08 21:04:20.823569] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:59.993 [2024-12-08 21:04:20.823580] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:59.994 [2024-12-08 21:04:20.823590] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:59.994 [2024-12-08 21:04:20.823599] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:59.994 [2024-12-08 21:04:20.823608] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:59.994 [2024-12-08 21:04:20.823621] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:59.994 [2024-12-08 21:04:20.823631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.994 [2024-12-08 21:04:20.823640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:59.994 [2024-12-08 21:04:20.823650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:17:59.994 [2024-12-08 21:04:20.823660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.994 [2024-12-08 21:04:20.823726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.994 [2024-12-08 21:04:20.823741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:59.994 [2024-12-08 21:04:20.823751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:59.994 [2024-12-08 21:04:20.823760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.994 [2024-12-08 21:04:20.823834] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:59.994 [2024-12-08 21:04:20.823849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:59.994 [2024-12-08 21:04:20.823860] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:59.994 [2024-12-08 21:04:20.823870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.994 [2024-12-08 21:04:20.823880] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:59.994 [2024-12-08 21:04:20.823889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:59.994 [2024-12-08 21:04:20.823898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:59.994 [2024-12-08 21:04:20.823907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:59.994 [2024-12-08 21:04:20.823917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:59.994 [2024-12-08 21:04:20.823925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:59.994 [2024-12-08 21:04:20.823933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:59.994 [2024-12-08 21:04:20.823942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:59.994 [2024-12-08 21:04:20.823950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:59.994 [2024-12-08 21:04:20.823960] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:59.994 [2024-12-08 21:04:20.823980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:59.994 [2024-12-08 21:04:20.823989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.994 [2024-12-08 21:04:20.823997] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:59.994 [2024-12-08 21:04:20.824006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:59.994 [2024-12-08 21:04:20.824015] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:59.994 [2024-12-08 21:04:20.824023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:59.994 [2024-12-08 21:04:20.824031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:59.994 [2024-12-08 21:04:20.824040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:59.994 [2024-12-08 21:04:20.824065] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:59.994 [2024-12-08 21:04:20.824073] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:59.994 [2024-12-08 21:04:20.824082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:59.994 [2024-12-08 21:04:20.824091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:59.994 [2024-12-08 21:04:20.824100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:59.994 [2024-12-08 21:04:20.824373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:59.994 [2024-12-08 21:04:20.824414] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:59.994 [2024-12-08 21:04:20.824449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:59.994 [2024-12-08 21:04:20.824481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:59.994 [2024-12-08 21:04:20.824616] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:59.994 [2024-12-08 21:04:20.824676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:59.994 [2024-12-08 21:04:20.824710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:59.994 [2024-12-08 21:04:20.824743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:59.994 [2024-12-08 21:04:20.824878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:59.994 [2024-12-08 21:04:20.824913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:59.994 [2024-12-08 21:04:20.825020] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:59.994 [2024-12-08 21:04:20.825065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:59.994 [2024-12-08 21:04:20.825192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:59.994 [2024-12-08 21:04:20.825237] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:59.994 [2024-12-08 21:04:20.825271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:59.994 [2024-12-08 21:04:20.825381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:59.994 [2024-12-08 21:04:20.825433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.994 [2024-12-08 21:04:20.825467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:59.994 [2024-12-08 21:04:20.825496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:59.994 [2024-12-08 21:04:20.825505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:59.994 [2024-12-08 21:04:20.825515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:59.994 [2024-12-08 21:04:20.825523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:59.994 [2024-12-08 21:04:20.825532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:59.994 [2024-12-08 21:04:20.825543] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:59.994 [2024-12-08 21:04:20.825555] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:59.994 [2024-12-08 21:04:20.825566] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:59.994 [2024-12-08 21:04:20.825576] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:59.994 [2024-12-08 21:04:20.825586] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:59.994 [2024-12-08 21:04:20.825596] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:59.994 [2024-12-08 21:04:20.825605] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:59.994 [2024-12-08 21:04:20.825614] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:59.994 [2024-12-08 21:04:20.825624] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:59.994 [2024-12-08 21:04:20.825634] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:59.994 [2024-12-08 21:04:20.825643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:59.994 [2024-12-08 21:04:20.825652] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:59.994 [2024-12-08 21:04:20.825662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:59.994 [2024-12-08 21:04:20.825671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:59.995 [2024-12-08 21:04:20.825682] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:59.995 [2024-12-08 21:04:20.825691] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:59.995 [2024-12-08 21:04:20.825709] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:59.995 [2024-12-08 21:04:20.825719] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:59.995 [2024-12-08 21:04:20.825729] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:59.995 [2024-12-08 21:04:20.825739] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:59.995 [2024-12-08 21:04:20.825749] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:59.995 [2024-12-08 21:04:20.825761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.825772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:59.995 [2024-12-08 21:04:20.825783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.963 ms 00:17:59.995 [2024-12-08 21:04:20.825792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.840804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.840842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.995 [2024-12-08 21:04:20.840857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.934 ms 00:17:59.995 [2024-12-08 21:04:20.840868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.840982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.840998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:59.995 [2024-12-08 21:04:20.841009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:59.995 [2024-12-08 21:04:20.841018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.880725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.880767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.995 [2024-12-08 21:04:20.880784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.681 ms 00:17:59.995 [2024-12-08 21:04:20.880793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.880877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.880894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.995 [2024-12-08 21:04:20.880911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.995 [2024-12-08 21:04:20.880920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.881257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.881274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.995 [2024-12-08 21:04:20.881286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:17:59.995 [2024-12-08 21:04:20.881296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.881427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.881468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.995 [2024-12-08 21:04:20.881496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:59.995 [2024-12-08 21:04:20.881507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.895332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.895367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.995 [2024-12-08 21:04:20.895382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.778 ms 00:17:59.995 
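(Editor's note: each FTL management step in the trace above arrives as a fixed group of trace_step records from mngt/ftl_mngt.c — an Action marker at line 406, the step name at 407, its duration at 409, and its status at 410. A minimal sketch for folding those groups back into (name, duration_ms, status) tuples when post-processing a log like this one; it assumes one record per console line, as in the raw Jenkins output before wrapping, and the regexes and helper name are mine, not SPDK's.)

import re

# trace_step records as printed by mngt/ftl_mngt.c in this log:
#   406:trace_step: ... Action            (group opener, ignored here)
#   407:trace_step: ... name: <step>
#   409:trace_step: ... duration: <n> ms
#   410:trace_step: ... status: <code>    (group closer)
NAME_RE = re.compile(r"407:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"409:trace_step: .*? duration: ([\d.]+) ms")
STAT_RE = re.compile(r"410:trace_step: .*? status: (-?\d+)")

def fold_steps(lines):
    """Yield one (step_name, duration_ms, status) tuple per closed group."""
    name = dur = None
    for line in lines:
        if m := NAME_RE.search(line):
            name = m.group(1).strip()
        elif m := DUR_RE.search(line):
            dur = float(m.group(1))
        elif m := STAT_RE.search(line):
            yield name, dur, int(m.group(1))
            name = dur = None

# Summing the per-step durations roughly accounts for the total the log
# prints later as "Management process finished, name 'FTL startup', ...":
#   sum(d for _, d, _ in fold_steps(open("console.log")))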
[2024-12-08 21:04:20.895395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.908504] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:59.995 [2024-12-08 21:04:20.908660] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:59.995 [2024-12-08 21:04:20.908681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.908692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:59.995 [2024-12-08 21:04:20.908703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.179 ms 00:17:59.995 [2024-12-08 21:04:20.908713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.931937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.932095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:59.995 [2024-12-08 21:04:20.932121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.143 ms 00:17:59.995 [2024-12-08 21:04:20.932133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.944836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.944870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:59.995 [2024-12-08 21:04:20.944895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.618 ms 00:17:59.995 [2024-12-08 21:04:20.944904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.957169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.957202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:59.995 [2024-12-08 21:04:20.957216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.192 ms 00:17:59.995 [2024-12-08 21:04:20.957225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:20.957586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:20.957605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:59.995 [2024-12-08 21:04:20.957616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:59.995 [2024-12-08 21:04:20.957629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:21.017243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.995 [2024-12-08 21:04:21.017301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:59.995 [2024-12-08 21:04:21.017334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.586 ms 00:17:59.995 [2024-12-08 21:04:21.017352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.995 [2024-12-08 21:04:21.027367] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:00.256 [2024-12-08 21:04:21.039372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.039421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:00.256 [2024-12-08 21:04:21.039437] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.907 ms 00:18:00.256 [2024-12-08 21:04:21.039449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.039593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.039612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:00.256 [2024-12-08 21:04:21.039628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:00.256 [2024-12-08 21:04:21.039639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.039713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.039728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:00.256 [2024-12-08 21:04:21.039739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:00.256 [2024-12-08 21:04:21.039749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.041823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.041857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:00.256 [2024-12-08 21:04:21.041886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:18:00.256 [2024-12-08 21:04:21.041895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.041931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.041950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:00.256 [2024-12-08 21:04:21.041960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:00.256 [2024-12-08 21:04:21.041970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.042025] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:00.256 [2024-12-08 21:04:21.042039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.042048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:00.256 [2024-12-08 21:04:21.042057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:00.256 [2024-12-08 21:04:21.042066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.066436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.066472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:00.256 [2024-12-08 21:04:21.066488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.347 ms 00:18:00.256 [2024-12-08 21:04:21.066497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.066586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.066604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:00.256 [2024-12-08 21:04:21.066614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:00.256 [2024-12-08 21:04:21.066623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.067894] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.256 [2024-12-08 21:04:21.071405] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.199 ms, result 0 00:18:00.256 [2024-12-08 21:04:21.072270] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:00.256 [2024-12-08 21:04:21.086650] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.256 [2024-12-08T21:04:21.299Z] Copying: 4096/4096 [kB] (average 21 MBps) [2024-12-08 21:04:21.272871] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:00.256 [2024-12-08 21:04:21.282938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.283148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:00.256 [2024-12-08 21:04:21.283173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:00.256 [2024-12-08 21:04:21.283185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.283220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:00.256 [2024-12-08 21:04:21.286538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.286566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:00.256 [2024-12-08 21:04:21.286594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:18:00.256 [2024-12-08 21:04:21.286603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.288180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.288220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:00.256 [2024-12-08 21:04:21.288263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:18:00.256 [2024-12-08 21:04:21.288284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.256 [2024-12-08 21:04:21.292152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.256 [2024-12-08 21:04:21.292189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:00.256 [2024-12-08 21:04:21.292204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:18:00.256 [2024-12-08 21:04:21.292216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.517 [2024-12-08 21:04:21.299908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.517 [2024-12-08 21:04:21.299937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:00.517 [2024-12-08 21:04:21.299966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.627 ms 00:18:00.517 [2024-12-08 21:04:21.299982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.517 [2024-12-08 21:04:21.326514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.517 [2024-12-08 21:04:21.326549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:00.517 [2024-12-08 21:04:21.326580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.469 ms 00:18:00.517 [2024-12-08
21:04:21.326590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.517 [2024-12-08 21:04:21.342178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.517 [2024-12-08 21:04:21.342213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:00.517 [2024-12-08 21:04:21.342243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.535 ms 00:18:00.517 [2024-12-08 21:04:21.342254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.517 [2024-12-08 21:04:21.342402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.517 [2024-12-08 21:04:21.342421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:00.517 [2024-12-08 21:04:21.342433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:00.517 [2024-12-08 21:04:21.342443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.517 [2024-12-08 21:04:21.368690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.517 [2024-12-08 21:04:21.368725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:00.517 [2024-12-08 21:04:21.368755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.228 ms 00:18:00.517 [2024-12-08 21:04:21.368764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.517 [2024-12-08 21:04:21.394339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.517 [2024-12-08 21:04:21.394513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:00.517 [2024-12-08 21:04:21.394538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.507 ms 00:18:00.517 [2024-12-08 21:04:21.394549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.517 [2024-12-08 21:04:21.419830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.517 [2024-12-08 21:04:21.419865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:00.517 [2024-12-08 21:04:21.419895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.208 ms 00:18:00.517 [2024-12-08 21:04:21.419905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.517 [2024-12-08 21:04:21.445252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.517 [2024-12-08 21:04:21.445287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:00.517 [2024-12-08 21:04:21.445317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.267 ms 00:18:00.517 [2024-12-08 21:04:21.445327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.517 [2024-12-08 21:04:21.445380] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:00.517 [2024-12-08 21:04:21.445402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:00.517 [2024-12-08 21:04:21.445414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:00.517 [2024-12-08 21:04:21.445440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:00.517 [2024-12-08 21:04:21.445449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:00.517 [2024-12-08 21:04:21.445459] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:00.517 [2024-12-08 21:04:21.445468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:00.517 [2024-12-08 21:04:21.445477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:00.517 [2024-12-08 21:04:21.445486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 
21:04:21.445690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:18:00.518 [2024-12-08 21:04:21.445919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.445993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:00.518 [2024-12-08 21:04:21.446378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:00.519 [2024-12-08 21:04:21.446389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:00.519 [2024-12-08 21:04:21.446398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:00.519 [2024-12-08 21:04:21.446408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:00.519 [2024-12-08 21:04:21.446441] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:00.519 [2024-12-08 21:04:21.446451] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4895114b-fe81-4086-8292-4d3881a52f6a 00:18:00.519 [2024-12-08 21:04:21.446460] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:00.519 [2024-12-08 21:04:21.446470] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:00.519 [2024-12-08 
21:04:21.446478] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:00.519 [2024-12-08 21:04:21.446488] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:00.519 [2024-12-08 21:04:21.446517] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:00.519 [2024-12-08 21:04:21.446527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:00.519 [2024-12-08 21:04:21.446537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:00.519 [2024-12-08 21:04:21.446546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:00.519 [2024-12-08 21:04:21.446555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:00.519 [2024-12-08 21:04:21.446565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.519 [2024-12-08 21:04:21.446589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:00.519 [2024-12-08 21:04:21.446599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:18:00.519 [2024-12-08 21:04:21.446609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.519 [2024-12-08 21:04:21.459697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.519 [2024-12-08 21:04:21.459728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:00.519 [2024-12-08 21:04:21.459748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.048 ms 00:18:00.519 [2024-12-08 21:04:21.459757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.519 [2024-12-08 21:04:21.459967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.519 [2024-12-08 21:04:21.459987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:00.519 [2024-12-08 21:04:21.459999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:18:00.519 [2024-12-08 21:04:21.460008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.519 [2024-12-08 21:04:21.498298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.519 [2024-12-08 21:04:21.498335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:00.519 [2024-12-08 21:04:21.498354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.519 [2024-12-08 21:04:21.498364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.519 [2024-12-08 21:04:21.498446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.519 [2024-12-08 21:04:21.498462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:00.519 [2024-12-08 21:04:21.498472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.519 [2024-12-08 21:04:21.498481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.519 [2024-12-08 21:04:21.498531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.519 [2024-12-08 21:04:21.498546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:00.519 [2024-12-08 21:04:21.498556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.519 [2024-12-08 21:04:21.498570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.519 [2024-12-08 21:04:21.498591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:00.519 [2024-12-08 21:04:21.498603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:00.519 [2024-12-08 21:04:21.498612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.519 [2024-12-08 21:04:21.498620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.779 [2024-12-08 21:04:21.575409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.779 [2024-12-08 21:04:21.575464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.779 [2024-12-08 21:04:21.575501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.779 [2024-12-08 21:04:21.575511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.779 [2024-12-08 21:04:21.605542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.779 [2024-12-08 21:04:21.605575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.779 [2024-12-08 21:04:21.605589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.779 [2024-12-08 21:04:21.605599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.779 [2024-12-08 21:04:21.605653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.779 [2024-12-08 21:04:21.605668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.779 [2024-12-08 21:04:21.605679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.779 [2024-12-08 21:04:21.605687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.779 [2024-12-08 21:04:21.605723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.779 [2024-12-08 21:04:21.605735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.779 [2024-12-08 21:04:21.605745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.779 [2024-12-08 21:04:21.605754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.779 [2024-12-08 21:04:21.605854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.779 [2024-12-08 21:04:21.605870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.779 [2024-12-08 21:04:21.605880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.779 [2024-12-08 21:04:21.605889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.779 [2024-12-08 21:04:21.605937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.779 [2024-12-08 21:04:21.605952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:00.779 [2024-12-08 21:04:21.605962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.779 [2024-12-08 21:04:21.605970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.779 [2024-12-08 21:04:21.606010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.779 [2024-12-08 21:04:21.606024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.779 [2024-12-08 21:04:21.606033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.779 [2024-12-08 21:04:21.606042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.779 
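(Editor's note: the Rollback records around this point are the 'FTL shutdown' pipeline unwinding each startup step in reverse; on this clean stop every one reports duration: 0.000 ms. The band-validity and statistics dumps a little further up are just as regular: one "Band N: <valid> / <size> wr_cnt: <n> state: <state>" record per band, then device totals in which WAF is total NAND writes over user writes — printed as inf here since user writes is 0 against 960 total writes. A small sketch that condenses the hundred band records and recomputes that figure; the helper names are mine, and the WAF convention is inferred from the dump itself.)

import math
import re
from collections import Counter

# One record per band, as printed by ftl_debug.c above.
BAND_RE = re.compile(r"Band \d+: (\d+) / (\d+) wr_cnt: \d+ state: (\w+)")

def summarize_bands(lines):
    """Collapse the per-band dump into {state: count} plus valid/size totals."""
    states, valid, size = Counter(), 0, 0
    for line in lines:
        for v, s, state in BAND_RE.findall(line):
            states[state] += 1
            valid += int(v)
            size += int(s)
    return states, valid, size

def waf(total_writes, user_writes):
    """Write amplification factor; infinite when no user writes landed yet."""
    return math.inf if user_writes == 0 else total_writes / user_writes

# The values from the statistics dump above: 960 total writes, 0 user writes.
assert waf(960, 0) == math.inf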
[2024-12-08 21:04:21.606146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.779 [2024-12-08 21:04:21.606167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.779 [2024-12-08 21:04:21.606178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.779 [2024-12-08 21:04:21.606188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.779 [2024-12-08 21:04:21.606334] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 323.394 ms, result 0 00:18:01.717 00:18:01.717 00:18:01.717 21:04:22 -- ftl/trim.sh@93 -- # svcpid=73127 00:18:01.717 21:04:22 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:01.717 21:04:22 -- ftl/trim.sh@94 -- # waitforlisten 73127 00:18:01.717 21:04:22 -- common/autotest_common.sh@829 -- # '[' -z 73127 ']' 00:18:01.717 21:04:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.717 21:04:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.717 21:04:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.717 21:04:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.717 21:04:22 -- common/autotest_common.sh@10 -- # set +x 00:18:01.717 [2024-12-08 21:04:22.614514] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:01.717 [2024-12-08 21:04:22.614676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73127 ] 00:18:01.977 [2024-12-08 21:04:22.778717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.977 [2024-12-08 21:04:22.921311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:01.977 [2024-12-08 21:04:22.921543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.544 21:04:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.544 21:04:23 -- common/autotest_common.sh@862 -- # return 0 00:18:02.544 21:04:23 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:02.802 [2024-12-08 21:04:23.686139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:02.802 [2024-12-08 21:04:23.686366] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:03.062 [2024-12-08 21:04:23.853426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.062 [2024-12-08 21:04:23.853470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:03.062 [2024-12-08 21:04:23.853490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:03.062 [2024-12-08 21:04:23.853501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.062 [2024-12-08 21:04:23.856031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.062 [2024-12-08 21:04:23.856068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:03.062 [2024-12-08 21:04:23.856115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.507 ms 00:18:03.062 [2024-12-08 21:04:23.856126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.062 [2024-12-08 21:04:23.856290] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:03.062 [2024-12-08 21:04:23.857179] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:03.062 [2024-12-08 21:04:23.857219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.062 [2024-12-08 21:04:23.857233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.062 [2024-12-08 21:04:23.857245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:18:03.062 [2024-12-08 21:04:23.857255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.062 [2024-12-08 21:04:23.858374] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:03.062 [2024-12-08 21:04:23.871144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.062 [2024-12-08 21:04:23.871185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:03.062 [2024-12-08 21:04:23.871201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.775 ms 00:18:03.062 [2024-12-08 21:04:23.871213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.062 [2024-12-08 21:04:23.871306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.062 [2024-12-08 21:04:23.871326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:03.062 [2024-12-08 21:04:23.871352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:03.062 [2024-12-08 21:04:23.871363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.062 [2024-12-08 21:04:23.875303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.063 [2024-12-08 21:04:23.875471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:03.063 [2024-12-08 21:04:23.875497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.887 ms 00:18:03.063 [2024-12-08 21:04:23.875510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.063 [2024-12-08 21:04:23.875613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.063 [2024-12-08 21:04:23.875634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.063 [2024-12-08 21:04:23.875646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:03.063 [2024-12-08 21:04:23.875657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.063 [2024-12-08 21:04:23.875692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.063 [2024-12-08 21:04:23.875708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:03.063 [2024-12-08 21:04:23.875719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:03.063 [2024-12-08 21:04:23.875731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.063 [2024-12-08 21:04:23.875767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:03.063 [2024-12-08 21:04:23.879487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.063 [2024-12-08 21:04:23.879518] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.063 [2024-12-08 21:04:23.879551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:18:03.063 [2024-12-08 21:04:23.879560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.063 [2024-12-08 21:04:23.879621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.063 [2024-12-08 21:04:23.879637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:03.063 [2024-12-08 21:04:23.879649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:03.063 [2024-12-08 21:04:23.879662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.063 [2024-12-08 21:04:23.879689] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:03.063 [2024-12-08 21:04:23.879711] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:03.063 [2024-12-08 21:04:23.879747] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:03.063 [2024-12-08 21:04:23.879765] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:03.063 [2024-12-08 21:04:23.879834] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:03.063 [2024-12-08 21:04:23.879848] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:03.063 [2024-12-08 21:04:23.879867] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:03.063 [2024-12-08 21:04:23.879879] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:03.063 [2024-12-08 21:04:23.879892] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:03.063 [2024-12-08 21:04:23.879902] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:03.063 [2024-12-08 21:04:23.879913] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:03.063 [2024-12-08 21:04:23.879922] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:03.063 [2024-12-08 21:04:23.879933] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:03.063 [2024-12-08 21:04:23.879943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.063 [2024-12-08 21:04:23.879954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:03.063 [2024-12-08 21:04:23.879964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:03.063 [2024-12-08 21:04:23.879975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.063 [2024-12-08 21:04:23.880039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.063 [2024-12-08 21:04:23.880054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:03.063 [2024-12-08 21:04:23.880064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:03.063 [2024-12-08 21:04:23.880075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.063 [2024-12-08 21:04:23.880182] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:18:03.063 [2024-12-08 21:04:23.880202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:03.063 [2024-12-08 21:04:23.880237] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.063 [2024-12-08 21:04:23.880269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:03.063 [2024-12-08 21:04:23.880290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:03.063 [2024-12-08 21:04:23.880316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:03.063 [2024-12-08 21:04:23.880326] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.063 [2024-12-08 21:04:23.880346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:03.063 [2024-12-08 21:04:23.880358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:03.063 [2024-12-08 21:04:23.880367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.063 [2024-12-08 21:04:23.880378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:03.063 [2024-12-08 21:04:23.880387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:03.063 [2024-12-08 21:04:23.880399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880408] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:03.063 [2024-12-08 21:04:23.880419] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:03.063 [2024-12-08 21:04:23.880428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:03.063 [2024-12-08 21:04:23.880448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:03.063 [2024-12-08 21:04:23.880460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:03.063 [2024-12-08 21:04:23.880469] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:03.063 [2024-12-08 21:04:23.880498] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:03.063 [2024-12-08 21:04:23.880531] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:03.063 [2024-12-08 21:04:23.880541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:03.063 [2024-12-08 21:04:23.880577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:03.063 [2024-12-08 21:04:23.880602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:03.063 [2024-12-08 21:04:23.880622] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:03.063 [2024-12-08 21:04:23.880632] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:03.063 [2024-12-08 21:04:23.880651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:03.063 [2024-12-08 21:04:23.880661] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.063 [2024-12-08 21:04:23.880681] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:03.063 [2024-12-08 21:04:23.880690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:03.063 [2024-12-08 21:04:23.880702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.063 [2024-12-08 21:04:23.880711] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:03.063 [2024-12-08 21:04:23.880725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:03.063 [2024-12-08 21:04:23.880734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.063 [2024-12-08 21:04:23.880746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.063 [2024-12-08 21:04:23.880756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:03.063 [2024-12-08 21:04:23.880767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:03.063 [2024-12-08 21:04:23.880776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:03.063 [2024-12-08 21:04:23.880788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:03.063 [2024-12-08 21:04:23.880797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:03.063 [2024-12-08 21:04:23.880807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:03.063 [2024-12-08 21:04:23.880818] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:03.063 [2024-12-08 21:04:23.880832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.063 [2024-12-08 21:04:23.880842] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:03.063 [2024-12-08 21:04:23.880854] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:03.064 [2024-12-08 21:04:23.880864] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:03.064 [2024-12-08 21:04:23.880878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:03.064 [2024-12-08 21:04:23.880888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:03.064 [2024-12-08 21:04:23.880899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:03.064 [2024-12-08 21:04:23.880909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:03.064 [2024-12-08 
21:04:23.880921] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:03.064 [2024-12-08 21:04:23.880931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:03.064 [2024-12-08 21:04:23.880942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:03.064 [2024-12-08 21:04:23.880952] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:03.064 [2024-12-08 21:04:23.880964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:03.064 [2024-12-08 21:04:23.880974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:03.064 [2024-12-08 21:04:23.880985] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:03.064 [2024-12-08 21:04:23.880996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.064 [2024-12-08 21:04:23.881008] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:03.064 [2024-12-08 21:04:23.881019] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:03.064 [2024-12-08 21:04:23.881030] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:03.064 [2024-12-08 21:04:23.881040] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:03.064 [2024-12-08 21:04:23.881054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.881064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:03.064 [2024-12-08 21:04:23.881076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:18:03.064 [2024-12-08 21:04:23.881100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.896139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.896177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.064 [2024-12-08 21:04:23.896198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.968 ms 00:18:03.064 [2024-12-08 21:04:23.896209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.896368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.896386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:03.064 [2024-12-08 21:04:23.896399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:03.064 [2024-12-08 21:04:23.896408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.926535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 
21:04:23.926578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.064 [2024-12-08 21:04:23.926597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.100 ms 00:18:03.064 [2024-12-08 21:04:23.926607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.926689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.926707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.064 [2024-12-08 21:04:23.926719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:03.064 [2024-12-08 21:04:23.926728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.927013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.927028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.064 [2024-12-08 21:04:23.927043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:03.064 [2024-12-08 21:04:23.927053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.927234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.927252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.064 [2024-12-08 21:04:23.927268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:18:03.064 [2024-12-08 21:04:23.927278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.941832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.941869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.064 [2024-12-08 21:04:23.941889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.528 ms 00:18:03.064 [2024-12-08 21:04:23.941899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.954984] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:03.064 [2024-12-08 21:04:23.955035] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:03.064 [2024-12-08 21:04:23.955055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.955065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:03.064 [2024-12-08 21:04:23.955107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.044 ms 00:18:03.064 [2024-12-08 21:04:23.955119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.978262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.978297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:03.064 [2024-12-08 21:04:23.978331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.047 ms 00:18:03.064 [2024-12-08 21:04:23.978342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:23.990904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:23.991059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:18:03.064 [2024-12-08 21:04:23.991106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.469 ms 00:18:03.064 [2024-12-08 21:04:23.991118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:24.003577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:24.003611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:03.064 [2024-12-08 21:04:24.003631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.376 ms 00:18:03.064 [2024-12-08 21:04:24.003640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:24.003997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:24.004015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:03.064 [2024-12-08 21:04:24.004031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:18:03.064 [2024-12-08 21:04:24.004040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:24.062804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:24.062863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:03.064 [2024-12-08 21:04:24.062887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.735 ms 00:18:03.064 [2024-12-08 21:04:24.062897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:24.072767] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:03.064 [2024-12-08 21:04:24.083728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:24.083782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:03.064 [2024-12-08 21:04:24.083799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.731 ms 00:18:03.064 [2024-12-08 21:04:24.083810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:24.083908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:24.083929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:03.064 [2024-12-08 21:04:24.083940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:03.064 [2024-12-08 21:04:24.083954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:24.084005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:24.084022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:03.064 [2024-12-08 21:04:24.084032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:03.064 [2024-12-08 21:04:24.084043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:24.085727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:24.085763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:03.064 [2024-12-08 21:04:24.085777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:18:03.064 [2024-12-08 21:04:24.085788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:03.064 [2024-12-08 21:04:24.085821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:24.085839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:03.064 [2024-12-08 21:04:24.085849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:03.064 [2024-12-08 21:04:24.085860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.064 [2024-12-08 21:04:24.085896] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:03.064 [2024-12-08 21:04:24.085914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.064 [2024-12-08 21:04:24.085923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:03.064 [2024-12-08 21:04:24.085934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:03.064 [2024-12-08 21:04:24.085943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.323 [2024-12-08 21:04:24.111530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.323 [2024-12-08 21:04:24.111567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:03.323 [2024-12-08 21:04:24.111586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.557 ms 00:18:03.323 [2024-12-08 21:04:24.111596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.323 [2024-12-08 21:04:24.111715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.323 [2024-12-08 21:04:24.111733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:03.323 [2024-12-08 21:04:24.111747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:03.323 [2024-12-08 21:04:24.111759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.323 [2024-12-08 21:04:24.112900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:03.323 [2024-12-08 21:04:24.116354] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.068 ms, result 0 00:18:03.323 [2024-12-08 21:04:24.117677] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:03.323 Some configs were skipped because the RPC state that can call them passed over. 
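The 'FTL startup' management process above finishes in 259.068 ms, and the trim test then unmaps the head and the tail of the device's LBA space. With the 23592960 L2P entries reported during layout setup, the second range starts at 23592960 - 1024 = 23591936, so the two RPC calls below trim exactly the first and the last 1024 blocks. A minimal sketch of the same pair of calls, assuming a running SPDK application that exposes ftl0 on the default RPC socket:

  # Trim the first 1024 blocks of the ftl0 bdev.
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  # Trim the last 1024 blocks: 23592960 L2P entries - 1024 = 23591936.
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba $((23592960 - 1024)) --num_blocks 1024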
00:18:03.323 21:04:24 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:03.581 [2024-12-08 21:04:24.416164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.581 [2024-12-08 21:04:24.416385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:03.581 [2024-12-08 21:04:24.416510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.298 ms 00:18:03.581 [2024-12-08 21:04:24.416575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.581 [2024-12-08 21:04:24.416720] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 24.846 ms, result 0 00:18:03.581 true 00:18:03.581 21:04:24 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:03.839 [2024-12-08 21:04:24.633803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.839 [2024-12-08 21:04:24.633980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:03.839 [2024-12-08 21:04:24.634131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.270 ms 00:18:03.839 [2024-12-08 21:04:24.634184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.839 [2024-12-08 21:04:24.634280] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 25.741 ms, result 0 00:18:03.839 true 00:18:03.839 21:04:24 -- ftl/trim.sh@102 -- # killprocess 73127 00:18:03.839 21:04:24 -- common/autotest_common.sh@936 -- # '[' -z 73127 ']' 00:18:03.839 21:04:24 -- common/autotest_common.sh@940 -- # kill -0 73127 00:18:03.839 21:04:24 -- common/autotest_common.sh@941 -- # uname 00:18:03.839 21:04:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.839 21:04:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73127 00:18:03.839 killing process with pid 73127 00:18:03.839 21:04:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:03.839 21:04:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:03.839 21:04:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73127' 00:18:03.839 21:04:24 -- common/autotest_common.sh@955 -- # kill 73127 00:18:03.839 21:04:24 -- common/autotest_common.sh@960 -- # wait 73127 00:18:04.406 [2024-12-08 21:04:25.417450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.406 [2024-12-08 21:04:25.417519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:04.406 [2024-12-08 21:04:25.417539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:04.406 [2024-12-08 21:04:25.417550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.406 [2024-12-08 21:04:25.417579] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:04.406 [2024-12-08 21:04:25.420194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.406 [2024-12-08 21:04:25.420223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:04.406 [2024-12-08 21:04:25.420251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:18:04.406 [2024-12-08 21:04:25.420261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.406 [2024-12-08 
21:04:25.420544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.406 [2024-12-08 21:04:25.420562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:04.406 [2024-12-08 21:04:25.420589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:18:04.406 [2024-12-08 21:04:25.420599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.406 [2024-12-08 21:04:25.423968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.406 [2024-12-08 21:04:25.424204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:04.406 [2024-12-08 21:04:25.424267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:18:04.406 [2024-12-08 21:04:25.424297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.406 [2024-12-08 21:04:25.430509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.406 [2024-12-08 21:04:25.430552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:04.406 [2024-12-08 21:04:25.430568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:18:04.406 [2024-12-08 21:04:25.430578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.406 [2024-12-08 21:04:25.440547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.406 [2024-12-08 21:04:25.440610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:04.406 [2024-12-08 21:04:25.440644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.911 ms 00:18:04.406 [2024-12-08 21:04:25.440654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.666 [2024-12-08 21:04:25.449450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.666 [2024-12-08 21:04:25.449486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:04.666 [2024-12-08 21:04:25.449535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.753 ms 00:18:04.666 [2024-12-08 21:04:25.449545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.666 [2024-12-08 21:04:25.449681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.666 [2024-12-08 21:04:25.449699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:04.666 [2024-12-08 21:04:25.449711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:04.666 [2024-12-08 21:04:25.449720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.666 [2024-12-08 21:04:25.460725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.666 [2024-12-08 21:04:25.460871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:04.666 [2024-12-08 21:04:25.460900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.981 ms 00:18:04.666 [2024-12-08 21:04:25.460911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.666 [2024-12-08 21:04:25.471022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.666 [2024-12-08 21:04:25.471055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:04.666 [2024-12-08 21:04:25.471102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.050 ms 00:18:04.666 [2024-12-08 21:04:25.471115] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:04.666 [2024-12-08 21:04:25.480890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.666 [2024-12-08 21:04:25.480921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:04.666 [2024-12-08 21:04:25.480937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.708 ms 00:18:04.666 [2024-12-08 21:04:25.480946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.666 [2024-12-08 21:04:25.490768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.666 [2024-12-08 21:04:25.490912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:04.666 [2024-12-08 21:04:25.490940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.757 ms 00:18:04.666 [2024-12-08 21:04:25.490951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.666 [2024-12-08 21:04:25.490996] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:04.666 [2024-12-08 21:04:25.491016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491273] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491570] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:04.666 [2024-12-08 21:04:25.491648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 
21:04:25.491821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.491989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:18:04.667 [2024-12-08 21:04:25.492071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:04.667 [2024-12-08 21:04:25.492182] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:04.667 [2024-12-08 21:04:25.492195] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4895114b-fe81-4086-8292-4d3881a52f6a 00:18:04.667 [2024-12-08 21:04:25.492205] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:04.667 [2024-12-08 21:04:25.492215] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:04.667 [2024-12-08 21:04:25.492247] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:04.667 [2024-12-08 21:04:25.492278] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:04.667 [2024-12-08 21:04:25.492288] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:04.667 [2024-12-08 21:04:25.492300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:04.667 [2024-12-08 21:04:25.492309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:04.667 [2024-12-08 21:04:25.492319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:04.667 [2024-12-08 21:04:25.492328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:04.667 [2024-12-08 21:04:25.492340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.667 [2024-12-08 21:04:25.492350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:04.667 [2024-12-08 21:04:25.492361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:18:04.667 [2024-12-08 21:04:25.492373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.667 [2024-12-08 21:04:25.505849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.667 [2024-12-08 21:04:25.505882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:04.667 [2024-12-08 21:04:25.505901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.436 ms 00:18:04.667 [2024-12-08 21:04:25.505911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.667 [2024-12-08 21:04:25.506159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.667 [2024-12-08 21:04:25.506177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:04.667 
[2024-12-08 21:04:25.506192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:18:04.667 [2024-12-08 21:04:25.506202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.667 [2024-12-08 21:04:25.550210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.667 [2024-12-08 21:04:25.550248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:04.667 [2024-12-08 21:04:25.550265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.667 [2024-12-08 21:04:25.550276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.667 [2024-12-08 21:04:25.550358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.667 [2024-12-08 21:04:25.550373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:04.667 [2024-12-08 21:04:25.550387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.667 [2024-12-08 21:04:25.550396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.667 [2024-12-08 21:04:25.550453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.667 [2024-12-08 21:04:25.550469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:04.667 [2024-12-08 21:04:25.550483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.667 [2024-12-08 21:04:25.550491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.667 [2024-12-08 21:04:25.550514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.667 [2024-12-08 21:04:25.550526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:04.667 [2024-12-08 21:04:25.550536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.667 [2024-12-08 21:04:25.550547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.667 [2024-12-08 21:04:25.629444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.667 [2024-12-08 21:04:25.629694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:04.667 [2024-12-08 21:04:25.629726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.667 [2024-12-08 21:04:25.629738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.667 [2024-12-08 21:04:25.659707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.667 [2024-12-08 21:04:25.659743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.667 [2024-12-08 21:04:25.659763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.667 [2024-12-08 21:04:25.659774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.667 [2024-12-08 21:04:25.659828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.667 [2024-12-08 21:04:25.659843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:04.668 [2024-12-08 21:04:25.659857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.668 [2024-12-08 21:04:25.659866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.668 [2024-12-08 21:04:25.659898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.668 [2024-12-08 21:04:25.659909] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:04.668 [2024-12-08 21:04:25.659920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.668 [2024-12-08 21:04:25.659929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.668 [2024-12-08 21:04:25.660036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.668 [2024-12-08 21:04:25.660053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:04.668 [2024-12-08 21:04:25.660065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.668 [2024-12-08 21:04:25.660109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.668 [2024-12-08 21:04:25.660163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.668 [2024-12-08 21:04:25.660179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:04.668 [2024-12-08 21:04:25.660201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.668 [2024-12-08 21:04:25.660210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.668 [2024-12-08 21:04:25.660282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.668 [2024-12-08 21:04:25.660297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:04.668 [2024-12-08 21:04:25.660311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.668 [2024-12-08 21:04:25.660321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.668 [2024-12-08 21:04:25.660373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.668 [2024-12-08 21:04:25.660388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:04.668 [2024-12-08 21:04:25.660400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.668 [2024-12-08 21:04:25.660425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.668 [2024-12-08 21:04:25.660587] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 243.116 ms, result 0 00:18:05.602 21:04:26 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:05.602 [2024-12-08 21:04:26.617913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
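The 'FTL shutdown' process above completes in 243.116 ms with result 0, after which trim.sh reads the trimmed data back through spdk_dd for verification; that read-back is what triggers the second FTL startup traced below. A minimal equivalent of the invocation, assuming the repository paths from the log and an ftl.json that recreates the ftl0 bdev:

  # Copy 65536 blocks out of the ftl0 bdev into a flat file for later comparison;
  # --json replays the bdev configuration saved earlier by the test.
  build/bin/spdk_dd --ib=ftl0 --of=test/ftl/data --count=65536 \
      --json=test/ftl/config/ftl.json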
00:18:05.602 [2024-12-08 21:04:26.618112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73181 ] 00:18:05.860 [2024-12-08 21:04:26.786903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.118 [2024-12-08 21:04:26.935959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.379 [2024-12-08 21:04:27.187769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:06.379 [2024-12-08 21:04:27.187864] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:06.379 [2024-12-08 21:04:27.338455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.338632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:06.379 [2024-12-08 21:04:27.338660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:06.379 [2024-12-08 21:04:27.338672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.341449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.341488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.379 [2024-12-08 21:04:27.341519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:18:06.379 [2024-12-08 21:04:27.341529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.341671] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:06.379 [2024-12-08 21:04:27.342483] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:06.379 [2024-12-08 21:04:27.342512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.342524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.379 [2024-12-08 21:04:27.342535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:18:06.379 [2024-12-08 21:04:27.342545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.343692] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:06.379 [2024-12-08 21:04:27.356390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.356427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:06.379 [2024-12-08 21:04:27.356459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.699 ms 00:18:06.379 [2024-12-08 21:04:27.356469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.356583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.356602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:06.379 [2024-12-08 21:04:27.356612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:06.379 [2024-12-08 21:04:27.356621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.360497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 
21:04:27.360530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.379 [2024-12-08 21:04:27.360543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.829 ms 00:18:06.379 [2024-12-08 21:04:27.360558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.360663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.360681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.379 [2024-12-08 21:04:27.360692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:06.379 [2024-12-08 21:04:27.360701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.360732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.360744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:06.379 [2024-12-08 21:04:27.360754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:06.379 [2024-12-08 21:04:27.360763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.360797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:06.379 [2024-12-08 21:04:27.364477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.364526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.379 [2024-12-08 21:04:27.364554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:18:06.379 [2024-12-08 21:04:27.364583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.364638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.364653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:06.379 [2024-12-08 21:04:27.364663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:06.379 [2024-12-08 21:04:27.364672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.364692] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:06.379 [2024-12-08 21:04:27.364714] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:06.379 [2024-12-08 21:04:27.364747] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:06.379 [2024-12-08 21:04:27.364766] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:06.379 [2024-12-08 21:04:27.364830] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:06.379 [2024-12-08 21:04:27.364844] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:06.379 [2024-12-08 21:04:27.364856] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:06.379 [2024-12-08 21:04:27.364867] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:06.379 [2024-12-08 21:04:27.364878] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:06.379 [2024-12-08 21:04:27.364888] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:06.379 [2024-12-08 21:04:27.364896] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:06.379 [2024-12-08 21:04:27.364905] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:06.379 [2024-12-08 21:04:27.364917] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:06.379 [2024-12-08 21:04:27.364926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.364935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:06.379 [2024-12-08 21:04:27.364945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:18:06.379 [2024-12-08 21:04:27.364953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.365016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.379 [2024-12-08 21:04:27.365030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:06.379 [2024-12-08 21:04:27.365039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:06.379 [2024-12-08 21:04:27.365048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.379 [2024-12-08 21:04:27.365154] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:06.379 [2024-12-08 21:04:27.365172] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:06.379 [2024-12-08 21:04:27.365183] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.379 [2024-12-08 21:04:27.365193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.379 [2024-12-08 21:04:27.365202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:06.379 [2024-12-08 21:04:27.365210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:06.379 [2024-12-08 21:04:27.365219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:06.379 [2024-12-08 21:04:27.365229] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:06.379 [2024-12-08 21:04:27.365238] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:06.379 [2024-12-08 21:04:27.365246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.379 [2024-12-08 21:04:27.365255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:06.379 [2024-12-08 21:04:27.365264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:06.379 [2024-12-08 21:04:27.365273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.379 [2024-12-08 21:04:27.365281] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:06.379 [2024-12-08 21:04:27.365302] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:06.379 [2024-12-08 21:04:27.365312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.379 [2024-12-08 21:04:27.365321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:06.379 [2024-12-08 21:04:27.365330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:06.379 [2024-12-08 21:04:27.365338] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:06.379 [2024-12-08 21:04:27.365346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:06.379 [2024-12-08 21:04:27.365355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:06.379 [2024-12-08 21:04:27.365364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:06.379 [2024-12-08 21:04:27.365373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:06.379 [2024-12-08 21:04:27.365381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:06.379 [2024-12-08 21:04:27.365390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:06.379 [2024-12-08 21:04:27.365398] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:06.379 [2024-12-08 21:04:27.365406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:06.379 [2024-12-08 21:04:27.365415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:06.379 [2024-12-08 21:04:27.365424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:06.379 [2024-12-08 21:04:27.365432] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:06.380 [2024-12-08 21:04:27.365440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:06.380 [2024-12-08 21:04:27.365449] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:06.380 [2024-12-08 21:04:27.365457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:06.380 [2024-12-08 21:04:27.365465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:06.380 [2024-12-08 21:04:27.365474] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:06.380 [2024-12-08 21:04:27.365482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:06.380 [2024-12-08 21:04:27.365506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.380 [2024-12-08 21:04:27.365514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:06.380 [2024-12-08 21:04:27.365522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:06.380 [2024-12-08 21:04:27.365530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.380 [2024-12-08 21:04:27.365538] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:06.380 [2024-12-08 21:04:27.365547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:06.380 [2024-12-08 21:04:27.365556] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.380 [2024-12-08 21:04:27.365570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.380 [2024-12-08 21:04:27.365578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:06.380 [2024-12-08 21:04:27.365587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:06.380 [2024-12-08 21:04:27.365595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:06.380 [2024-12-08 21:04:27.365604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:06.380 [2024-12-08 21:04:27.365612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:06.380 [2024-12-08 21:04:27.365621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:06.380 [2024-12-08 21:04:27.365630] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:06.380 [2024-12-08 21:04:27.365641] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.380 [2024-12-08 21:04:27.365651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:06.380 [2024-12-08 21:04:27.365660] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:06.380 [2024-12-08 21:04:27.365668] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:06.380 [2024-12-08 21:04:27.365678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:06.380 [2024-12-08 21:04:27.365687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:06.380 [2024-12-08 21:04:27.365695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:06.380 [2024-12-08 21:04:27.365704] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:06.380 [2024-12-08 21:04:27.365713] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:06.380 [2024-12-08 21:04:27.365722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:06.380 [2024-12-08 21:04:27.365731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:06.380 [2024-12-08 21:04:27.365740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:06.380 [2024-12-08 21:04:27.365749] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:06.380 [2024-12-08 21:04:27.365759] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:06.380 [2024-12-08 21:04:27.365767] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:06.380 [2024-12-08 21:04:27.365782] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.380 [2024-12-08 21:04:27.365792] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:06.380 [2024-12-08 21:04:27.365801] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:06.380 [2024-12-08 21:04:27.365810] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:06.380 [2024-12-08 21:04:27.365820] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:06.380 [2024-12-08 21:04:27.365830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.380 [2024-12-08 21:04:27.365840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:06.380 [2024-12-08 21:04:27.365849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:18:06.380 [2024-12-08 21:04:27.365858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.380 [2024-12-08 21:04:27.380944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.380 [2024-12-08 21:04:27.381098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.380 [2024-12-08 21:04:27.381241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.038 ms 00:18:06.380 [2024-12-08 21:04:27.381286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.380 [2024-12-08 21:04:27.381502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.380 [2024-12-08 21:04:27.381550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:06.380 [2024-12-08 21:04:27.381724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:06.380 [2024-12-08 21:04:27.381850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.429521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.429694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.640 [2024-12-08 21:04:27.429839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.603 ms 00:18:06.640 [2024-12-08 21:04:27.429885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.430051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.430159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.640 [2024-12-08 21:04:27.430348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:06.640 [2024-12-08 21:04:27.430395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.430747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.430854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.640 [2024-12-08 21:04:27.430946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:06.640 [2024-12-08 21:04:27.431045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.431232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.431288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.640 [2024-12-08 21:04:27.431374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:06.640 [2024-12-08 21:04:27.431415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.445720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.445861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.640 [2024-12-08 21:04:27.445960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.243 ms 00:18:06.640 
[2024-12-08 21:04:27.446011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.458795] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:06.640 [2024-12-08 21:04:27.458961] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:06.640 [2024-12-08 21:04:27.459141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.459245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:06.640 [2024-12-08 21:04:27.459292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.953 ms 00:18:06.640 [2024-12-08 21:04:27.459376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.482812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.482960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:06.640 [2024-12-08 21:04:27.483064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.305 ms 00:18:06.640 [2024-12-08 21:04:27.483124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.495526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.495731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:06.640 [2024-12-08 21:04:27.495859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.295 ms 00:18:06.640 [2024-12-08 21:04:27.495918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.508463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.508657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:06.640 [2024-12-08 21:04:27.508751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.440 ms 00:18:06.640 [2024-12-08 21:04:27.508793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.640 [2024-12-08 21:04:27.509226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.640 [2024-12-08 21:04:27.509367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:06.640 [2024-12-08 21:04:27.509457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:06.641 [2024-12-08 21:04:27.509507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.569183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.641 [2024-12-08 21:04:27.569423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:06.641 [2024-12-08 21:04:27.569450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.536 ms 00:18:06.641 [2024-12-08 21:04:27.569469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.579726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:06.641 [2024-12-08 21:04:27.591019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.641 [2024-12-08 21:04:27.591299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:06.641 [2024-12-08 21:04:27.591328] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.399 ms 00:18:06.641 [2024-12-08 21:04:27.591339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.591471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.641 [2024-12-08 21:04:27.591489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:06.641 [2024-12-08 21:04:27.591506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:06.641 [2024-12-08 21:04:27.591530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.591590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.641 [2024-12-08 21:04:27.591604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:06.641 [2024-12-08 21:04:27.591615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:06.641 [2024-12-08 21:04:27.591625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.593456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.641 [2024-12-08 21:04:27.593488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:06.641 [2024-12-08 21:04:27.593517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms 00:18:06.641 [2024-12-08 21:04:27.593526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.593561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.641 [2024-12-08 21:04:27.593593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:06.641 [2024-12-08 21:04:27.593603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:06.641 [2024-12-08 21:04:27.593612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.593646] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:06.641 [2024-12-08 21:04:27.593659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.641 [2024-12-08 21:04:27.593668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:06.641 [2024-12-08 21:04:27.593677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:06.641 [2024-12-08 21:04:27.593686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.618045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.641 [2024-12-08 21:04:27.618089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:06.641 [2024-12-08 21:04:27.618120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.334 ms 00:18:06.641 [2024-12-08 21:04:27.618131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.618242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.641 [2024-12-08 21:04:27.618260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:06.641 [2024-12-08 21:04:27.618272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:06.641 [2024-12-08 21:04:27.618282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.641 [2024-12-08 21:04:27.619607] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:06.641 [2024-12-08 21:04:27.623130] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.774 ms, result 0 00:18:06.641 [2024-12-08 21:04:27.623957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:06.641 [2024-12-08 21:04:27.638195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:08.021  [2024-12-08T21:04:30.002Z] Copying: 24/256 [MB] (24 MBps) [2024-12-08T21:04:30.941Z] Copying: 46/256 [MB] (21 MBps) [2024-12-08T21:04:31.892Z] Copying: 68/256 [MB] (22 MBps) [2024-12-08T21:04:32.831Z] Copying: 90/256 [MB] (21 MBps) [2024-12-08T21:04:33.766Z] Copying: 112/256 [MB] (22 MBps) [2024-12-08T21:04:34.701Z] Copying: 133/256 [MB] (21 MBps) [2024-12-08T21:04:36.091Z] Copying: 155/256 [MB] (21 MBps) [2024-12-08T21:04:37.028Z] Copying: 177/256 [MB] (21 MBps) [2024-12-08T21:04:38.057Z] Copying: 198/256 [MB] (21 MBps) [2024-12-08T21:04:38.706Z] Copying: 219/256 [MB] (21 MBps) [2024-12-08T21:04:39.644Z] Copying: 241/256 [MB] (21 MBps) [2024-12-08T21:04:39.644Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-08 21:04:39.376348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:18.601 [2024-12-08 21:04:39.386812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.386858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:18.601 [2024-12-08 21:04:39.386891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:18.601 [2024-12-08 21:04:39.386901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.386928] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:18.601 [2024-12-08 21:04:39.390510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.390539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:18.601 [2024-12-08 21:04:39.390567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.564 ms 00:18:18.601 [2024-12-08 21:04:39.390577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.390883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.390906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:18.601 [2024-12-08 21:04:39.390918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:18:18.601 [2024-12-08 21:04:39.390941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.394175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.394200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:18.601 [2024-12-08 21:04:39.394227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:18:18.601 [2024-12-08 21:04:39.394238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.400440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.400648] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:18.601 [2024-12-08 21:04:39.400671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms 00:18:18.601 [2024-12-08 21:04:39.400682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.424998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.425034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:18.601 [2024-12-08 21:04:39.425048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.237 ms 00:18:18.601 [2024-12-08 21:04:39.425057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.439783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.439933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:18.601 [2024-12-08 21:04:39.439958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.627 ms 00:18:18.601 [2024-12-08 21:04:39.439970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.440164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.440185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:18.601 [2024-12-08 21:04:39.440197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:18:18.601 [2024-12-08 21:04:39.440206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.464933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.464969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:18.601 [2024-12-08 21:04:39.464983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.681 ms 00:18:18.601 [2024-12-08 21:04:39.464992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.489475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.489510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:18.601 [2024-12-08 21:04:39.489524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.432 ms 00:18:18.601 [2024-12-08 21:04:39.489533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.513836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.513871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:18.601 [2024-12-08 21:04:39.513885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.250 ms 00:18:18.601 [2024-12-08 21:04:39.513894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.538228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.601 [2024-12-08 21:04:39.538263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:18.601 [2024-12-08 21:04:39.538292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.256 ms 00:18:18.601 [2024-12-08 21:04:39.538302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.601 [2024-12-08 21:04:39.538357] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:18:18.601 [2024-12-08 21:04:39.538379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:18.601 [2024-12-08 21:04:39.538852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.538998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539080] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539397] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:18.602 [2024-12-08 21:04:39.539415] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:18.602 [2024-12-08 21:04:39.539425] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4895114b-fe81-4086-8292-4d3881a52f6a 00:18:18.602 [2024-12-08 21:04:39.539435] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:18.602 [2024-12-08 21:04:39.539444] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:18.602 [2024-12-08 21:04:39.539452] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:18.602 [2024-12-08 21:04:39.539462] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:18.602 [2024-12-08 21:04:39.539471] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:18.602 [2024-12-08 21:04:39.539485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:18.602 [2024-12-08 21:04:39.539494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:18.602 [2024-12-08 21:04:39.539518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:18.602 [2024-12-08 21:04:39.539542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:18.602 [2024-12-08 21:04:39.539567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.602 [2024-12-08 21:04:39.539577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:18.602 [2024-12-08 21:04:39.539587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:18:18.602 [2024-12-08 21:04:39.539597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.602 [2024-12-08 21:04:39.553350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.602 [2024-12-08 21:04:39.553396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:18.602 [2024-12-08 21:04:39.553415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.714 ms 00:18:18.602 [2024-12-08 21:04:39.553425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.602 [2024-12-08 21:04:39.553634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.602 [2024-12-08 21:04:39.553653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:18.602 [2024-12-08 21:04:39.553664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:18:18.602 [2024-12-08 21:04:39.553689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.602 [2024-12-08 21:04:39.591810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.602 [2024-12-08 21:04:39.591847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.602 [2024-12-08 21:04:39.591865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.602 [2024-12-08 21:04:39.591875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.602 [2024-12-08 21:04:39.591955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.602 [2024-12-08 21:04:39.591970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.602 [2024-12-08 21:04:39.591980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.602 [2024-12-08 21:04:39.591989] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.602 [2024-12-08 21:04:39.592040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.602 [2024-12-08 21:04:39.592056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.602 [2024-12-08 21:04:39.592066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.602 [2024-12-08 21:04:39.592143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.602 [2024-12-08 21:04:39.592167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.602 [2024-12-08 21:04:39.592179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.602 [2024-12-08 21:04:39.592189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.602 [2024-12-08 21:04:39.592198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.861 [2024-12-08 21:04:39.669017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.861 [2024-12-08 21:04:39.669083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.861 [2024-12-08 21:04:39.669120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.861 [2024-12-08 21:04:39.669130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.861 [2024-12-08 21:04:39.699005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.861 [2024-12-08 21:04:39.699039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.861 [2024-12-08 21:04:39.699053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.861 [2024-12-08 21:04:39.699062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.861 [2024-12-08 21:04:39.699166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.861 [2024-12-08 21:04:39.699183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:18.861 [2024-12-08 21:04:39.699194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.861 [2024-12-08 21:04:39.699220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.861 [2024-12-08 21:04:39.699258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.861 [2024-12-08 21:04:39.699271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:18.861 [2024-12-08 21:04:39.699281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.861 [2024-12-08 21:04:39.699290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.861 [2024-12-08 21:04:39.699400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.861 [2024-12-08 21:04:39.699418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:18.861 [2024-12-08 21:04:39.699429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.861 [2024-12-08 21:04:39.699454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.861 [2024-12-08 21:04:39.699550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.861 [2024-12-08 21:04:39.699566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:18.861 [2024-12-08 21:04:39.699576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:18:18.861 [2024-12-08 21:04:39.699586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.861 [2024-12-08 21:04:39.699628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.861 [2024-12-08 21:04:39.699640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:18.861 [2024-12-08 21:04:39.699651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.861 [2024-12-08 21:04:39.699660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.861 [2024-12-08 21:04:39.699713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.861 [2024-12-08 21:04:39.699730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:18.861 [2024-12-08 21:04:39.699740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.861 [2024-12-08 21:04:39.699749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.861 [2024-12-08 21:04:39.699898] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 313.091 ms, result 0 00:18:19.796 00:18:19.796 00:18:19.796 21:04:40 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:20.055 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:20.055 21:04:41 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:20.055 21:04:41 -- ftl/trim.sh@109 -- # fio_kill 00:18:20.055 21:04:41 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:20.055 21:04:41 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:20.055 21:04:41 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:20.313 21:04:41 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:20.313 21:04:41 -- ftl/trim.sh@20 -- # killprocess 73127 00:18:20.313 21:04:41 -- common/autotest_common.sh@936 -- # '[' -z 73127 ']' 00:18:20.313 21:04:41 -- common/autotest_common.sh@940 -- # kill -0 73127 00:18:20.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (73127) - No such process 00:18:20.313 Process with pid 73127 is not found 00:18:20.313 21:04:41 -- common/autotest_common.sh@963 -- # echo 'Process with pid 73127 is not found' 00:18:20.313 ************************************ 00:18:20.313 END TEST ftl_trim 00:18:20.313 ************************************ 00:18:20.313 00:18:20.313 real 1m7.614s 00:18:20.313 user 1m27.516s 00:18:20.313 sys 0m10.752s 00:18:20.313 21:04:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:20.314 21:04:41 -- common/autotest_common.sh@10 -- # set +x 00:18:20.314 21:04:41 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:18:20.314 21:04:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:20.314 21:04:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:20.314 21:04:41 -- common/autotest_common.sh@10 -- # set +x 00:18:20.314 ************************************ 00:18:20.314 START TEST ftl_restore 00:18:20.314 ************************************ 00:18:20.314 21:04:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:18:20.314 * Looking for test storage... 
00:18:20.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.314 21:04:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:20.314 21:04:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:20.314 21:04:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:20.572 21:04:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:20.572 21:04:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:20.572 21:04:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:20.572 21:04:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:20.572 21:04:41 -- scripts/common.sh@335 -- # IFS=.-: 00:18:20.572 21:04:41 -- scripts/common.sh@335 -- # read -ra ver1 00:18:20.572 21:04:41 -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.572 21:04:41 -- scripts/common.sh@336 -- # read -ra ver2 00:18:20.572 21:04:41 -- scripts/common.sh@337 -- # local 'op=<' 00:18:20.572 21:04:41 -- scripts/common.sh@339 -- # ver1_l=2 00:18:20.572 21:04:41 -- scripts/common.sh@340 -- # ver2_l=1 00:18:20.572 21:04:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:20.572 21:04:41 -- scripts/common.sh@343 -- # case "$op" in 00:18:20.572 21:04:41 -- scripts/common.sh@344 -- # : 1 00:18:20.572 21:04:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:20.572 21:04:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:20.572 21:04:41 -- scripts/common.sh@364 -- # decimal 1 00:18:20.572 21:04:41 -- scripts/common.sh@352 -- # local d=1 00:18:20.572 21:04:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.572 21:04:41 -- scripts/common.sh@354 -- # echo 1 00:18:20.572 21:04:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:20.572 21:04:41 -- scripts/common.sh@365 -- # decimal 2 00:18:20.572 21:04:41 -- scripts/common.sh@352 -- # local d=2 00:18:20.572 21:04:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.572 21:04:41 -- scripts/common.sh@354 -- # echo 2 00:18:20.572 21:04:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:20.572 21:04:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:20.572 21:04:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:20.572 21:04:41 -- scripts/common.sh@367 -- # return 0 00:18:20.572 21:04:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.572 21:04:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:20.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.572 --rc genhtml_branch_coverage=1 00:18:20.572 --rc genhtml_function_coverage=1 00:18:20.572 --rc genhtml_legend=1 00:18:20.572 --rc geninfo_all_blocks=1 00:18:20.573 --rc geninfo_unexecuted_blocks=1 00:18:20.573 00:18:20.573 ' 00:18:20.573 21:04:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.573 --rc genhtml_branch_coverage=1 00:18:20.573 --rc genhtml_function_coverage=1 00:18:20.573 --rc genhtml_legend=1 00:18:20.573 --rc geninfo_all_blocks=1 00:18:20.573 --rc geninfo_unexecuted_blocks=1 00:18:20.573 00:18:20.573 ' 00:18:20.573 21:04:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.573 --rc genhtml_branch_coverage=1 00:18:20.573 --rc genhtml_function_coverage=1 00:18:20.573 --rc genhtml_legend=1 00:18:20.573 --rc geninfo_all_blocks=1 00:18:20.573 --rc geninfo_unexecuted_blocks=1 00:18:20.573 00:18:20.573 ' 00:18:20.573 21:04:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.573 --rc genhtml_branch_coverage=1 00:18:20.573 --rc genhtml_function_coverage=1 00:18:20.573 --rc genhtml_legend=1 00:18:20.573 --rc geninfo_all_blocks=1 00:18:20.573 --rc geninfo_unexecuted_blocks=1 00:18:20.573 00:18:20.573 ' 00:18:20.573 21:04:41 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:20.573 21:04:41 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:20.573 21:04:41 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.573 21:04:41 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.573 21:04:41 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:20.573 21:04:41 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:20.573 21:04:41 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.573 21:04:41 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:20.573 21:04:41 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:20.573 21:04:41 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.573 21:04:41 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.573 21:04:41 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:20.573 21:04:41 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:20.573 21:04:41 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.573 21:04:41 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.573 21:04:41 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:20.573 21:04:41 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:20.573 21:04:41 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.573 21:04:41 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.573 21:04:41 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:20.573 21:04:41 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:20.573 21:04:41 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.573 21:04:41 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.573 21:04:41 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.573 21:04:41 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.573 21:04:41 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:20.573 21:04:41 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:20.573 21:04:41 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.573 21:04:41 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.573 21:04:41 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.573 21:04:41 -- ftl/restore.sh@13 -- # mktemp -d 00:18:20.573 21:04:41 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.OmR8qE3KiE 00:18:20.573 21:04:41 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:20.573 21:04:41 -- ftl/restore.sh@16 -- # case $opt in 00:18:20.573 21:04:41 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:18:20.573 21:04:41 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 
00:18:20.573 21:04:41 -- ftl/restore.sh@23 -- # shift 2 00:18:20.573 21:04:41 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:18:20.573 21:04:41 -- ftl/restore.sh@25 -- # timeout=240 00:18:20.573 21:04:41 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:20.573 21:04:41 -- ftl/restore.sh@39 -- # svcpid=73400 00:18:20.573 21:04:41 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.573 21:04:41 -- ftl/restore.sh@41 -- # waitforlisten 73400 00:18:20.573 21:04:41 -- common/autotest_common.sh@829 -- # '[' -z 73400 ']' 00:18:20.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.573 21:04:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.573 21:04:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.573 21:04:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.573 21:04:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.573 21:04:41 -- common/autotest_common.sh@10 -- # set +x 00:18:20.573 [2024-12-08 21:04:41.532635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:20.573 [2024-12-08 21:04:41.532790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73400 ] 00:18:20.831 [2024-12-08 21:04:41.702688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.831 [2024-12-08 21:04:41.845129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:20.831 [2024-12-08 21:04:41.845342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.769 21:04:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.769 21:04:42 -- common/autotest_common.sh@862 -- # return 0 00:18:21.769 21:04:42 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:21.769 21:04:42 -- ftl/common.sh@54 -- # local name=nvme0 00:18:21.769 21:04:42 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:21.769 21:04:42 -- ftl/common.sh@56 -- # local size=103424 00:18:21.769 21:04:42 -- ftl/common.sh@59 -- # local base_bdev 00:18:21.769 21:04:42 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:21.769 21:04:42 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:21.769 21:04:42 -- ftl/common.sh@62 -- # local base_size 00:18:21.769 21:04:42 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:21.769 21:04:42 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:18:21.769 21:04:42 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:21.769 21:04:42 -- common/autotest_common.sh@1369 -- # local bs 00:18:21.769 21:04:42 -- common/autotest_common.sh@1370 -- # local nb 00:18:21.769 21:04:42 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:22.028 21:04:43 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:22.028 { 00:18:22.028 "name": "nvme0n1", 00:18:22.028 "aliases": [ 00:18:22.028 "db66c9a3-4542-4440-beb0-6e87696bde72" 00:18:22.028 ], 00:18:22.028 "product_name": "NVMe disk", 00:18:22.028 "block_size": 4096, 00:18:22.028 "num_blocks": 1310720, 00:18:22.028 "uuid": 
"db66c9a3-4542-4440-beb0-6e87696bde72", 00:18:22.028 "assigned_rate_limits": { 00:18:22.028 "rw_ios_per_sec": 0, 00:18:22.028 "rw_mbytes_per_sec": 0, 00:18:22.028 "r_mbytes_per_sec": 0, 00:18:22.028 "w_mbytes_per_sec": 0 00:18:22.028 }, 00:18:22.028 "claimed": true, 00:18:22.028 "claim_type": "read_many_write_one", 00:18:22.028 "zoned": false, 00:18:22.028 "supported_io_types": { 00:18:22.028 "read": true, 00:18:22.028 "write": true, 00:18:22.028 "unmap": true, 00:18:22.028 "write_zeroes": true, 00:18:22.028 "flush": true, 00:18:22.028 "reset": true, 00:18:22.028 "compare": true, 00:18:22.028 "compare_and_write": false, 00:18:22.028 "abort": true, 00:18:22.028 "nvme_admin": true, 00:18:22.028 "nvme_io": true 00:18:22.028 }, 00:18:22.028 "driver_specific": { 00:18:22.028 "nvme": [ 00:18:22.028 { 00:18:22.028 "pci_address": "0000:00:07.0", 00:18:22.028 "trid": { 00:18:22.028 "trtype": "PCIe", 00:18:22.028 "traddr": "0000:00:07.0" 00:18:22.028 }, 00:18:22.028 "ctrlr_data": { 00:18:22.028 "cntlid": 0, 00:18:22.028 "vendor_id": "0x1b36", 00:18:22.028 "model_number": "QEMU NVMe Ctrl", 00:18:22.028 "serial_number": "12341", 00:18:22.028 "firmware_revision": "8.0.0", 00:18:22.028 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:22.028 "oacs": { 00:18:22.028 "security": 0, 00:18:22.028 "format": 1, 00:18:22.028 "firmware": 0, 00:18:22.028 "ns_manage": 1 00:18:22.028 }, 00:18:22.028 "multi_ctrlr": false, 00:18:22.029 "ana_reporting": false 00:18:22.029 }, 00:18:22.029 "vs": { 00:18:22.029 "nvme_version": "1.4" 00:18:22.029 }, 00:18:22.029 "ns_data": { 00:18:22.029 "id": 1, 00:18:22.029 "can_share": false 00:18:22.029 } 00:18:22.029 } 00:18:22.029 ], 00:18:22.029 "mp_policy": "active_passive" 00:18:22.029 } 00:18:22.029 } 00:18:22.029 ]' 00:18:22.029 21:04:43 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:22.287 21:04:43 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:22.287 21:04:43 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:22.287 21:04:43 -- common/autotest_common.sh@1373 -- # nb=1310720 00:18:22.287 21:04:43 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:18:22.287 21:04:43 -- common/autotest_common.sh@1377 -- # echo 5120 00:18:22.287 21:04:43 -- ftl/common.sh@63 -- # base_size=5120 00:18:22.287 21:04:43 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:22.287 21:04:43 -- ftl/common.sh@67 -- # clear_lvols 00:18:22.287 21:04:43 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:22.287 21:04:43 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:22.546 21:04:43 -- ftl/common.sh@28 -- # stores=95fdb38c-cd12-4fe2-9986-eba8b2821102 00:18:22.546 21:04:43 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:22.546 21:04:43 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95fdb38c-cd12-4fe2-9986-eba8b2821102 00:18:22.805 21:04:43 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:23.065 21:04:43 -- ftl/common.sh@68 -- # lvs=27b03bc3-ca96-47d2-a8bc-54dbb347c3a8 00:18:23.065 21:04:43 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 27b03bc3-ca96-47d2-a8bc-54dbb347c3a8 00:18:23.065 21:04:44 -- ftl/restore.sh@43 -- # split_bdev=fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:23.065 21:04:44 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:18:23.065 21:04:44 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:23.065 21:04:44 -- ftl/common.sh@35 -- # local name=nvc0 00:18:23.065 21:04:44 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:23.065 21:04:44 -- ftl/common.sh@37 -- # local base_bdev=fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:23.065 21:04:44 -- ftl/common.sh@38 -- # local cache_size= 00:18:23.065 21:04:44 -- ftl/common.sh@41 -- # get_bdev_size fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:23.065 21:04:44 -- common/autotest_common.sh@1367 -- # local bdev_name=fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:23.065 21:04:44 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:23.065 21:04:44 -- common/autotest_common.sh@1369 -- # local bs 00:18:23.065 21:04:44 -- common/autotest_common.sh@1370 -- # local nb 00:18:23.065 21:04:44 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:23.324 21:04:44 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:23.324 { 00:18:23.324 "name": "fdc3fbff-90f7-477f-ac30-d07b71c92bb0", 00:18:23.324 "aliases": [ 00:18:23.324 "lvs/nvme0n1p0" 00:18:23.324 ], 00:18:23.324 "product_name": "Logical Volume", 00:18:23.324 "block_size": 4096, 00:18:23.324 "num_blocks": 26476544, 00:18:23.324 "uuid": "fdc3fbff-90f7-477f-ac30-d07b71c92bb0", 00:18:23.324 "assigned_rate_limits": { 00:18:23.324 "rw_ios_per_sec": 0, 00:18:23.324 "rw_mbytes_per_sec": 0, 00:18:23.324 "r_mbytes_per_sec": 0, 00:18:23.324 "w_mbytes_per_sec": 0 00:18:23.324 }, 00:18:23.324 "claimed": false, 00:18:23.324 "zoned": false, 00:18:23.324 "supported_io_types": { 00:18:23.324 "read": true, 00:18:23.324 "write": true, 00:18:23.324 "unmap": true, 00:18:23.324 "write_zeroes": true, 00:18:23.324 "flush": false, 00:18:23.324 "reset": true, 00:18:23.324 "compare": false, 00:18:23.324 "compare_and_write": false, 00:18:23.324 "abort": false, 00:18:23.324 "nvme_admin": false, 00:18:23.324 "nvme_io": false 00:18:23.324 }, 00:18:23.324 "driver_specific": { 00:18:23.324 "lvol": { 00:18:23.324 "lvol_store_uuid": "27b03bc3-ca96-47d2-a8bc-54dbb347c3a8", 00:18:23.324 "base_bdev": "nvme0n1", 00:18:23.324 "thin_provision": true, 00:18:23.324 "snapshot": false, 00:18:23.324 "clone": false, 00:18:23.324 "esnap_clone": false 00:18:23.324 } 00:18:23.324 } 00:18:23.324 } 00:18:23.324 ]' 00:18:23.324 21:04:44 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:23.324 21:04:44 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:23.324 21:04:44 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:23.583 21:04:44 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:23.583 21:04:44 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:23.583 21:04:44 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:23.583 21:04:44 -- ftl/common.sh@41 -- # local base_size=5171 00:18:23.583 21:04:44 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:23.583 21:04:44 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:23.841 21:04:44 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:23.841 21:04:44 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:23.841 21:04:44 -- ftl/common.sh@48 -- # get_bdev_size fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:23.841 21:04:44 -- common/autotest_common.sh@1367 -- # local bdev_name=fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:23.841 21:04:44 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:23.841 21:04:44 -- common/autotest_common.sh@1369 -- # local 
bs 00:18:23.841 21:04:44 -- common/autotest_common.sh@1370 -- # local nb 00:18:23.841 21:04:44 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:24.100 21:04:44 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:24.100 { 00:18:24.100 "name": "fdc3fbff-90f7-477f-ac30-d07b71c92bb0", 00:18:24.100 "aliases": [ 00:18:24.100 "lvs/nvme0n1p0" 00:18:24.100 ], 00:18:24.100 "product_name": "Logical Volume", 00:18:24.100 "block_size": 4096, 00:18:24.100 "num_blocks": 26476544, 00:18:24.100 "uuid": "fdc3fbff-90f7-477f-ac30-d07b71c92bb0", 00:18:24.100 "assigned_rate_limits": { 00:18:24.100 "rw_ios_per_sec": 0, 00:18:24.100 "rw_mbytes_per_sec": 0, 00:18:24.100 "r_mbytes_per_sec": 0, 00:18:24.100 "w_mbytes_per_sec": 0 00:18:24.100 }, 00:18:24.100 "claimed": false, 00:18:24.100 "zoned": false, 00:18:24.100 "supported_io_types": { 00:18:24.100 "read": true, 00:18:24.100 "write": true, 00:18:24.100 "unmap": true, 00:18:24.100 "write_zeroes": true, 00:18:24.100 "flush": false, 00:18:24.100 "reset": true, 00:18:24.100 "compare": false, 00:18:24.100 "compare_and_write": false, 00:18:24.100 "abort": false, 00:18:24.100 "nvme_admin": false, 00:18:24.100 "nvme_io": false 00:18:24.100 }, 00:18:24.100 "driver_specific": { 00:18:24.100 "lvol": { 00:18:24.100 "lvol_store_uuid": "27b03bc3-ca96-47d2-a8bc-54dbb347c3a8", 00:18:24.100 "base_bdev": "nvme0n1", 00:18:24.100 "thin_provision": true, 00:18:24.100 "snapshot": false, 00:18:24.100 "clone": false, 00:18:24.100 "esnap_clone": false 00:18:24.100 } 00:18:24.100 } 00:18:24.100 } 00:18:24.100 ]' 00:18:24.100 21:04:44 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:24.100 21:04:44 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:24.100 21:04:44 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:24.100 21:04:45 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:24.100 21:04:45 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:24.100 21:04:45 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:24.100 21:04:45 -- ftl/common.sh@48 -- # cache_size=5171 00:18:24.100 21:04:45 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:24.358 21:04:45 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:24.358 21:04:45 -- ftl/restore.sh@48 -- # get_bdev_size fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:24.358 21:04:45 -- common/autotest_common.sh@1367 -- # local bdev_name=fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:24.358 21:04:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:24.358 21:04:45 -- common/autotest_common.sh@1369 -- # local bs 00:18:24.358 21:04:45 -- common/autotest_common.sh@1370 -- # local nb 00:18:24.358 21:04:45 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdc3fbff-90f7-477f-ac30-d07b71c92bb0 00:18:24.617 21:04:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:24.617 { 00:18:24.617 "name": "fdc3fbff-90f7-477f-ac30-d07b71c92bb0", 00:18:24.617 "aliases": [ 00:18:24.617 "lvs/nvme0n1p0" 00:18:24.617 ], 00:18:24.617 "product_name": "Logical Volume", 00:18:24.617 "block_size": 4096, 00:18:24.617 "num_blocks": 26476544, 00:18:24.617 "uuid": "fdc3fbff-90f7-477f-ac30-d07b71c92bb0", 00:18:24.617 "assigned_rate_limits": { 00:18:24.617 "rw_ios_per_sec": 0, 00:18:24.617 "rw_mbytes_per_sec": 0, 00:18:24.617 "r_mbytes_per_sec": 0, 00:18:24.617 "w_mbytes_per_sec": 0 00:18:24.617 }, 00:18:24.617 
"claimed": false, 00:18:24.617 "zoned": false, 00:18:24.617 "supported_io_types": { 00:18:24.617 "read": true, 00:18:24.617 "write": true, 00:18:24.617 "unmap": true, 00:18:24.617 "write_zeroes": true, 00:18:24.617 "flush": false, 00:18:24.617 "reset": true, 00:18:24.617 "compare": false, 00:18:24.617 "compare_and_write": false, 00:18:24.617 "abort": false, 00:18:24.617 "nvme_admin": false, 00:18:24.617 "nvme_io": false 00:18:24.617 }, 00:18:24.617 "driver_specific": { 00:18:24.617 "lvol": { 00:18:24.617 "lvol_store_uuid": "27b03bc3-ca96-47d2-a8bc-54dbb347c3a8", 00:18:24.617 "base_bdev": "nvme0n1", 00:18:24.617 "thin_provision": true, 00:18:24.617 "snapshot": false, 00:18:24.617 "clone": false, 00:18:24.617 "esnap_clone": false 00:18:24.617 } 00:18:24.617 } 00:18:24.617 } 00:18:24.617 ]' 00:18:24.617 21:04:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:24.617 21:04:45 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:24.617 21:04:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:24.617 21:04:45 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:24.617 21:04:45 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:24.617 21:04:45 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:24.617 21:04:45 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:24.617 21:04:45 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fdc3fbff-90f7-477f-ac30-d07b71c92bb0 --l2p_dram_limit 10' 00:18:24.617 21:04:45 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:24.617 21:04:45 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:18:24.617 21:04:45 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:24.617 21:04:45 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:24.617 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:24.617 21:04:45 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fdc3fbff-90f7-477f-ac30-d07b71c92bb0 --l2p_dram_limit 10 -c nvc0n1p0 00:18:24.877 [2024-12-08 21:04:45.736619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.877 [2024-12-08 21:04:45.736812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:24.877 [2024-12-08 21:04:45.736846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:24.877 [2024-12-08 21:04:45.736862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.877 [2024-12-08 21:04:45.736936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.877 [2024-12-08 21:04:45.736952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.877 [2024-12-08 21:04:45.736965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:24.877 [2024-12-08 21:04:45.736975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.877 [2024-12-08 21:04:45.737005] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:24.877 [2024-12-08 21:04:45.737892] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:24.877 [2024-12-08 21:04:45.737935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.877 [2024-12-08 21:04:45.737947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.877 [2024-12-08 21:04:45.737960] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:18:24.877 [2024-12-08 21:04:45.737969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.877 [2024-12-08 21:04:45.738159] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 97a1d5b0-5381-44ba-bfc8-56f53a684fe6 00:18:24.877 [2024-12-08 21:04:45.738985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.877 [2024-12-08 21:04:45.739014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:24.877 [2024-12-08 21:04:45.739028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:24.877 [2024-12-08 21:04:45.739039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.877 [2024-12-08 21:04:45.742849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.877 [2024-12-08 21:04:45.742891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.877 [2024-12-08 21:04:45.742905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.752 ms 00:18:24.877 [2024-12-08 21:04:45.742917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.877 [2024-12-08 21:04:45.743015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.877 [2024-12-08 21:04:45.743033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.877 [2024-12-08 21:04:45.743045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:24.877 [2024-12-08 21:04:45.743059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.877 [2024-12-08 21:04:45.743125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.877 [2024-12-08 21:04:45.743147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:24.877 [2024-12-08 21:04:45.743158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:24.877 [2024-12-08 21:04:45.743169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.877 [2024-12-08 21:04:45.743200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:24.877 [2024-12-08 21:04:45.746909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.877 [2024-12-08 21:04:45.746941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.877 [2024-12-08 21:04:45.746958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.716 ms 00:18:24.877 [2024-12-08 21:04:45.746967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.877 [2024-12-08 21:04:45.747007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.878 [2024-12-08 21:04:45.747019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:24.878 [2024-12-08 21:04:45.747031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:24.878 [2024-12-08 21:04:45.747040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.878 [2024-12-08 21:04:45.747125] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:24.878 [2024-12-08 21:04:45.747260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:24.878 [2024-12-08 21:04:45.747281] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:24.878 [2024-12-08 21:04:45.747295] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:24.878 [2024-12-08 21:04:45.747310] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:24.878 [2024-12-08 21:04:45.747322] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:24.878 [2024-12-08 21:04:45.747336] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:24.878 [2024-12-08 21:04:45.747358] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:24.878 [2024-12-08 21:04:45.747370] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:24.878 [2024-12-08 21:04:45.747380] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:24.878 [2024-12-08 21:04:45.747392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.878 [2024-12-08 21:04:45.747418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:24.878 [2024-12-08 21:04:45.747446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:24.878 [2024-12-08 21:04:45.747471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.878 [2024-12-08 21:04:45.747549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.878 [2024-12-08 21:04:45.747562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:24.878 [2024-12-08 21:04:45.747575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:24.878 [2024-12-08 21:04:45.747586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.878 [2024-12-08 21:04:45.747662] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:24.878 [2024-12-08 21:04:45.747682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:24.878 [2024-12-08 21:04:45.747696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.878 [2024-12-08 21:04:45.747707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.878 [2024-12-08 21:04:45.747719] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:24.878 [2024-12-08 21:04:45.747728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:24.878 [2024-12-08 21:04:45.747739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:24.878 [2024-12-08 21:04:45.747748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:24.878 [2024-12-08 21:04:45.747759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:24.878 [2024-12-08 21:04:45.747768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.878 [2024-12-08 21:04:45.747779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:24.878 [2024-12-08 21:04:45.747789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:24.878 [2024-12-08 21:04:45.747802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.878 [2024-12-08 21:04:45.747811] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:24.878 [2024-12-08 21:04:45.747822] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:18:24.878 [2024-12-08 21:04:45.747831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.878 [2024-12-08 21:04:45.747844] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:24.878 [2024-12-08 21:04:45.747853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:24.878 [2024-12-08 21:04:45.747864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.878 [2024-12-08 21:04:45.747873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:24.878 [2024-12-08 21:04:45.747884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:24.878 [2024-12-08 21:04:45.747893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:24.878 [2024-12-08 21:04:45.747904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:24.878 [2024-12-08 21:04:45.747913] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:24.878 [2024-12-08 21:04:45.747924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:24.878 [2024-12-08 21:04:45.747933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:24.878 [2024-12-08 21:04:45.747944] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:24.878 [2024-12-08 21:04:45.747953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:24.878 [2024-12-08 21:04:45.747963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:24.878 [2024-12-08 21:04:45.747973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:24.878 [2024-12-08 21:04:45.747983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:24.878 [2024-12-08 21:04:45.747992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:24.878 [2024-12-08 21:04:45.748005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:24.878 [2024-12-08 21:04:45.748014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:24.878 [2024-12-08 21:04:45.748025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:24.878 [2024-12-08 21:04:45.748034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:24.878 [2024-12-08 21:04:45.748045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.878 [2024-12-08 21:04:45.748054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:24.878 [2024-12-08 21:04:45.748155] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:24.878 [2024-12-08 21:04:45.748169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.878 [2024-12-08 21:04:45.748181] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:24.878 [2024-12-08 21:04:45.748191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:24.878 [2024-12-08 21:04:45.748212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.878 [2024-12-08 21:04:45.748243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.878 [2024-12-08 21:04:45.748259] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:24.878 [2024-12-08 21:04:45.748270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:24.878 [2024-12-08 21:04:45.748281] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:24.878 [2024-12-08 21:04:45.748291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:24.878 [2024-12-08 21:04:45.748305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:24.878 [2024-12-08 21:04:45.748315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:24.878 [2024-12-08 21:04:45.748328] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:24.878 [2024-12-08 21:04:45.748342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.878 [2024-12-08 21:04:45.748356] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:24.878 [2024-12-08 21:04:45.748367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:24.878 [2024-12-08 21:04:45.748379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:24.878 [2024-12-08 21:04:45.748391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:24.878 [2024-12-08 21:04:45.748403] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:24.878 [2024-12-08 21:04:45.748414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:24.878 [2024-12-08 21:04:45.748426] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:24.878 [2024-12-08 21:04:45.748437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:24.878 [2024-12-08 21:04:45.748450] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:24.878 [2024-12-08 21:04:45.748461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:24.878 [2024-12-08 21:04:45.748473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:24.878 [2024-12-08 21:04:45.748499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:24.878 [2024-12-08 21:04:45.748515] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:24.878 [2024-12-08 21:04:45.748526] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:24.878 [2024-12-08 21:04:45.748539] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.878 [2024-12-08 21:04:45.748551] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:24.878 [2024-12-08 21:04:45.748563] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:24.878 [2024-12-08 21:04:45.748588] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:24.878 [2024-12-08 21:04:45.748600] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:24.878 [2024-12-08 21:04:45.748611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.878 [2024-12-08 21:04:45.748623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:24.878 [2024-12-08 21:04:45.748634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:18:24.878 [2024-12-08 21:04:45.748645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.878 [2024-12-08 21:04:45.764071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.878 [2024-12-08 21:04:45.764126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.878 [2024-12-08 21:04:45.764162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.365 ms 00:18:24.879 [2024-12-08 21:04:45.764175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.879 [2024-12-08 21:04:45.764310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.879 [2024-12-08 21:04:45.764331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:24.879 [2024-12-08 21:04:45.764345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:24.879 [2024-12-08 21:04:45.764357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.879 [2024-12-08 21:04:45.794517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.879 [2024-12-08 21:04:45.794579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.879 [2024-12-08 21:04:45.794596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.101 ms 00:18:24.879 [2024-12-08 21:04:45.794608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.879 [2024-12-08 21:04:45.794649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.879 [2024-12-08 21:04:45.794664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:24.879 [2024-12-08 21:04:45.794674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:24.879 [2024-12-08 21:04:45.794687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.879 [2024-12-08 21:04:45.795024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.879 [2024-12-08 21:04:45.795043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:24.879 [2024-12-08 21:04:45.795054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:18:24.879 [2024-12-08 21:04:45.795065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.879 [2024-12-08 21:04:45.795249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.879 [2024-12-08 21:04:45.795272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:24.879 [2024-12-08 21:04:45.795284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 
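Continuing the sketch earlier in this trace, the cache-side steps and the FTL creation itself correspond to the calls below. The device names, split size and l2p limit are the ones shown in this log; the long -t 240 rpc.py timeout matters because, as the trace notes, a first startup scrubs the NV cache data region (4 GiB in this run) before the bdev becomes available.

# Sketch of the NV-cache + FTL creation shown in this trace (names/UUIDs are from this run).
"$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0   # cache controller -> nvc0n1
"$rpc_py" bdev_split_create nvc0n1 -s 5171 1                            # 5171 MiB cache slice -> nvc0n1p0

# FTL bdev on top of the thin lvol, with nvc0n1p0 as the write-buffer cache.
"$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d "$split_bdev" --l2p_dram_limit 10 -c nvc0n1p0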
00:18:24.879 [2024-12-08 21:04:45.795298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.879 [2024-12-08 21:04:45.809611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.879 [2024-12-08 21:04:45.809650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:24.879 [2024-12-08 21:04:45.809665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.289 ms 00:18:24.879 [2024-12-08 21:04:45.809676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.879 [2024-12-08 21:04:45.820258] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:24.879 [2024-12-08 21:04:45.822700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.879 [2024-12-08 21:04:45.822730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:24.879 [2024-12-08 21:04:45.822747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.943 ms 00:18:24.879 [2024-12-08 21:04:45.822757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.879 [2024-12-08 21:04:45.893672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.879 [2024-12-08 21:04:45.893736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:24.879 [2024-12-08 21:04:45.893756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.881 ms 00:18:24.879 [2024-12-08 21:04:45.893766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.879 [2024-12-08 21:04:45.893824] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:24.879 [2024-12-08 21:04:45.893841] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:27.412 [2024-12-08 21:04:48.443389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.412 [2024-12-08 21:04:48.443446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:27.412 [2024-12-08 21:04:48.443483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2549.585 ms 00:18:27.412 [2024-12-08 21:04:48.443494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.412 [2024-12-08 21:04:48.443690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.412 [2024-12-08 21:04:48.443707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:27.412 [2024-12-08 21:04:48.443723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:18:27.412 [2024-12-08 21:04:48.443733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.670 [2024-12-08 21:04:48.470560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.670 [2024-12-08 21:04:48.470597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:27.670 [2024-12-08 21:04:48.470630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.769 ms 00:18:27.670 [2024-12-08 21:04:48.470641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.670 [2024-12-08 21:04:48.496224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.670 [2024-12-08 21:04:48.496292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:27.670 [2024-12-08 21:04:48.496317] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.511 ms 00:18:27.670 [2024-12-08 21:04:48.496328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.670 [2024-12-08 21:04:48.496689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.670 [2024-12-08 21:04:48.496710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:27.670 [2024-12-08 21:04:48.496724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:18:27.670 [2024-12-08 21:04:48.496735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.670 [2024-12-08 21:04:48.562972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.670 [2024-12-08 21:04:48.563011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:27.670 [2024-12-08 21:04:48.563030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.190 ms 00:18:27.670 [2024-12-08 21:04:48.563040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.670 [2024-12-08 21:04:48.588861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.670 [2024-12-08 21:04:48.589019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:27.670 [2024-12-08 21:04:48.589049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.748 ms 00:18:27.670 [2024-12-08 21:04:48.589062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.670 [2024-12-08 21:04:48.590825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.670 [2024-12-08 21:04:48.590861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:27.670 [2024-12-08 21:04:48.590880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:18:27.671 [2024-12-08 21:04:48.590889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.671 [2024-12-08 21:04:48.616095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.671 [2024-12-08 21:04:48.616131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:27.671 [2024-12-08 21:04:48.616148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.148 ms 00:18:27.671 [2024-12-08 21:04:48.616158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.671 [2024-12-08 21:04:48.616251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.671 [2024-12-08 21:04:48.616270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:27.671 [2024-12-08 21:04:48.616284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:27.671 [2024-12-08 21:04:48.616294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.671 [2024-12-08 21:04:48.616390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.671 [2024-12-08 21:04:48.616407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:27.671 [2024-12-08 21:04:48.616420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:27.671 [2024-12-08 21:04:48.616430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.671 [2024-12-08 21:04:48.617666] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2880.427 ms, result 0 00:18:27.671 { 00:18:27.671 "name": 
"ftl0", 00:18:27.671 "uuid": "97a1d5b0-5381-44ba-bfc8-56f53a684fe6" 00:18:27.671 } 00:18:27.671 21:04:48 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:27.671 21:04:48 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:27.929 21:04:48 -- ftl/restore.sh@63 -- # echo ']}' 00:18:27.929 21:04:48 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:28.188 [2024-12-08 21:04:49.160954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.188 [2024-12-08 21:04:49.161174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:28.188 [2024-12-08 21:04:49.161203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:28.188 [2024-12-08 21:04:49.161218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.188 [2024-12-08 21:04:49.161260] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:28.188 [2024-12-08 21:04:49.163981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.188 [2024-12-08 21:04:49.164008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:28.188 [2024-12-08 21:04:49.164024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:18:28.188 [2024-12-08 21:04:49.164043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.188 [2024-12-08 21:04:49.164389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.188 [2024-12-08 21:04:49.164408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:28.188 [2024-12-08 21:04:49.164436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:28.188 [2024-12-08 21:04:49.164463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.188 [2024-12-08 21:04:49.167209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.188 [2024-12-08 21:04:49.167236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:28.188 [2024-12-08 21:04:49.167267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.723 ms 00:18:28.188 [2024-12-08 21:04:49.167278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.188 [2024-12-08 21:04:49.172554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.188 [2024-12-08 21:04:49.172583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:28.188 [2024-12-08 21:04:49.172598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.250 ms 00:18:28.188 [2024-12-08 21:04:49.172607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.188 [2024-12-08 21:04:49.196985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.189 [2024-12-08 21:04:49.197021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:28.189 [2024-12-08 21:04:49.197039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.289 ms 00:18:28.189 [2024-12-08 21:04:49.197049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.189 [2024-12-08 21:04:49.212452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.189 [2024-12-08 21:04:49.212504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:28.189 
[2024-12-08 21:04:49.212521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.328 ms 00:18:28.189 [2024-12-08 21:04:49.212532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.189 [2024-12-08 21:04:49.212679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.189 [2024-12-08 21:04:49.212697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:28.189 [2024-12-08 21:04:49.212709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:28.189 [2024-12-08 21:04:49.212721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.450 [2024-12-08 21:04:49.238138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.450 [2024-12-08 21:04:49.238172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:28.450 [2024-12-08 21:04:49.238189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.379 ms 00:18:28.450 [2024-12-08 21:04:49.238198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.450 [2024-12-08 21:04:49.262324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.450 [2024-12-08 21:04:49.262359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:28.450 [2024-12-08 21:04:49.262393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.081 ms 00:18:28.450 [2024-12-08 21:04:49.262402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.450 [2024-12-08 21:04:49.286153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.450 [2024-12-08 21:04:49.286351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:28.450 [2024-12-08 21:04:49.286383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.703 ms 00:18:28.450 [2024-12-08 21:04:49.286395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.450 [2024-12-08 21:04:49.310640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.450 [2024-12-08 21:04:49.310675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:28.450 [2024-12-08 21:04:49.310708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.135 ms 00:18:28.450 [2024-12-08 21:04:49.310718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.450 [2024-12-08 21:04:49.310764] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:28.450 [2024-12-08 21:04:49.310787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 
/ 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.310997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:28.450 [2024-12-08 21:04:49.311508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311519] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 
21:04:49.311820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.311994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.312008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.312019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.312031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:28.451 [2024-12-08 21:04:49.312049] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:28.451 [2024-12-08 21:04:49.312061] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97a1d5b0-5381-44ba-bfc8-56f53a684fe6 00:18:28.451 [2024-12-08 21:04:49.312072] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:28.451 [2024-12-08 21:04:49.312099] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:28.451 [2024-12-08 21:04:49.312109] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:28.451 [2024-12-08 21:04:49.312121] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:28.451 [2024-12-08 21:04:49.312131] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:28.451 [2024-12-08 
21:04:49.312143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:28.451 [2024-12-08 21:04:49.312154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:28.451 [2024-12-08 21:04:49.312176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:28.451 [2024-12-08 21:04:49.312187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:28.451 [2024-12-08 21:04:49.312211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.451 [2024-12-08 21:04:49.312239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:28.451 [2024-12-08 21:04:49.312256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:18:28.451 [2024-12-08 21:04:49.312267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.327705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.451 [2024-12-08 21:04:49.327848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:28.451 [2024-12-08 21:04:49.327878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.391 ms 00:18:28.451 [2024-12-08 21:04:49.327891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.328159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.451 [2024-12-08 21:04:49.328218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:28.451 [2024-12-08 21:04:49.328252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:18:28.451 [2024-12-08 21:04:49.328273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.374856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.451 [2024-12-08 21:04:49.375008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:28.451 [2024-12-08 21:04:49.375037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.451 [2024-12-08 21:04:49.375049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.375155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.451 [2024-12-08 21:04:49.375174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:28.451 [2024-12-08 21:04:49.375187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.451 [2024-12-08 21:04:49.375198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.375290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.451 [2024-12-08 21:04:49.375309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:28.451 [2024-12-08 21:04:49.375322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.451 [2024-12-08 21:04:49.375347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.375372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.451 [2024-12-08 21:04:49.375384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:28.451 [2024-12-08 21:04:49.375414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.451 [2024-12-08 21:04:49.375423] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.452179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.451 [2024-12-08 21:04:49.452468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:28.451 [2024-12-08 21:04:49.452502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.451 [2024-12-08 21:04:49.452515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.483425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.451 [2024-12-08 21:04:49.483568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:28.451 [2024-12-08 21:04:49.483599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.451 [2024-12-08 21:04:49.483611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.483697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.451 [2024-12-08 21:04:49.483713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:28.451 [2024-12-08 21:04:49.483726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.451 [2024-12-08 21:04:49.483736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.483795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.451 [2024-12-08 21:04:49.483810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:28.451 [2024-12-08 21:04:49.483823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.451 [2024-12-08 21:04:49.483835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.451 [2024-12-08 21:04:49.483991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.452 [2024-12-08 21:04:49.484008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:28.452 [2024-12-08 21:04:49.484021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.452 [2024-12-08 21:04:49.484031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.452 [2024-12-08 21:04:49.484081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.452 [2024-12-08 21:04:49.484096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:28.452 [2024-12-08 21:04:49.484108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.452 [2024-12-08 21:04:49.484158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.452 [2024-12-08 21:04:49.484284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.452 [2024-12-08 21:04:49.484317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:28.452 [2024-12-08 21:04:49.484330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.452 [2024-12-08 21:04:49.484341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.452 [2024-12-08 21:04:49.484396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:28.452 [2024-12-08 21:04:49.484412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:28.452 [2024-12-08 21:04:49.484425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:28.452 
[2024-12-08 21:04:49.484438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.452 [2024-12-08 21:04:49.484662] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 323.590 ms, result 0 00:18:28.452 true 00:18:28.711 21:04:49 -- ftl/restore.sh@66 -- # killprocess 73400 00:18:28.711 21:04:49 -- common/autotest_common.sh@936 -- # '[' -z 73400 ']' 00:18:28.711 21:04:49 -- common/autotest_common.sh@940 -- # kill -0 73400 00:18:28.711 21:04:49 -- common/autotest_common.sh@941 -- # uname 00:18:28.711 21:04:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:28.711 21:04:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73400 00:18:28.711 killing process with pid 73400 00:18:28.711 21:04:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:28.711 21:04:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:28.711 21:04:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73400' 00:18:28.711 21:04:49 -- common/autotest_common.sh@955 -- # kill 73400 00:18:28.711 21:04:49 -- common/autotest_common.sh@960 -- # wait 73400 00:18:32.903 21:04:53 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:37.086 262144+0 records in 00:18:37.086 262144+0 records out 00:18:37.086 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.92386 s, 274 MB/s 00:18:37.086 21:04:57 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:38.467 21:04:59 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:38.467 [2024-12-08 21:04:59.327690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:38.467 [2024-12-08 21:04:59.327855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73638 ] 00:18:38.467 [2024-12-08 21:04:59.497405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.732 [2024-12-08 21:04:59.670022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.991 [2024-12-08 21:04:59.916098] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:38.991 [2024-12-08 21:04:59.916171] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:39.252 [2024-12-08 21:05:00.064957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.252 [2024-12-08 21:05:00.065246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:39.252 [2024-12-08 21:05:00.065278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:39.252 [2024-12-08 21:05:00.065297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.252 [2024-12-08 21:05:00.065365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.252 [2024-12-08 21:05:00.065382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.252 [2024-12-08 21:05:00.065394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:39.252 [2024-12-08 21:05:00.065405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.252 [2024-12-08 21:05:00.065433] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:39.252 [2024-12-08 21:05:00.066230] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:39.252 [2024-12-08 21:05:00.066252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.252 [2024-12-08 21:05:00.066263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.252 [2024-12-08 21:05:00.066273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:18:39.252 [2024-12-08 21:05:00.066284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.252 [2024-12-08 21:05:00.067281] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:39.252 [2024-12-08 21:05:00.080293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.252 [2024-12-08 21:05:00.080464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:39.252 [2024-12-08 21:05:00.080491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.013 ms 00:18:39.252 [2024-12-08 21:05:00.080504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.252 [2024-12-08 21:05:00.080612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.252 [2024-12-08 21:05:00.080629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:39.252 [2024-12-08 21:05:00.080640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:39.252 [2024-12-08 21:05:00.080650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.252 [2024-12-08 21:05:00.084595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.252 [2024-12-08 
21:05:00.084629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.252 [2024-12-08 21:05:00.084643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.865 ms 00:18:39.252 [2024-12-08 21:05:00.084652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.252 [2024-12-08 21:05:00.084739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.253 [2024-12-08 21:05:00.084755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.253 [2024-12-08 21:05:00.084765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:39.253 [2024-12-08 21:05:00.084775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.253 [2024-12-08 21:05:00.084821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.253 [2024-12-08 21:05:00.084833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:39.253 [2024-12-08 21:05:00.084843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:39.253 [2024-12-08 21:05:00.084852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.253 [2024-12-08 21:05:00.084884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:39.253 [2024-12-08 21:05:00.088433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.253 [2024-12-08 21:05:00.088468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.253 [2024-12-08 21:05:00.088499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.562 ms 00:18:39.253 [2024-12-08 21:05:00.088510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.253 [2024-12-08 21:05:00.088576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.253 [2024-12-08 21:05:00.088589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:39.253 [2024-12-08 21:05:00.088615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:39.253 [2024-12-08 21:05:00.088627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.253 [2024-12-08 21:05:00.088650] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:39.253 [2024-12-08 21:05:00.088671] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:39.253 [2024-12-08 21:05:00.088704] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:39.253 [2024-12-08 21:05:00.088721] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:39.253 [2024-12-08 21:05:00.088784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:39.253 [2024-12-08 21:05:00.088797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:39.253 [2024-12-08 21:05:00.088813] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:39.253 [2024-12-08 21:05:00.088825] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:39.253 [2024-12-08 21:05:00.088835] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:39.253 [2024-12-08 21:05:00.088846] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:39.253 [2024-12-08 21:05:00.088855] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:39.253 [2024-12-08 21:05:00.088863] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:39.253 [2024-12-08 21:05:00.088871] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:39.253 [2024-12-08 21:05:00.088881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.253 [2024-12-08 21:05:00.088889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:39.253 [2024-12-08 21:05:00.088898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:18:39.253 [2024-12-08 21:05:00.088907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.253 [2024-12-08 21:05:00.088965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.253 [2024-12-08 21:05:00.088977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:39.253 [2024-12-08 21:05:00.088986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:39.253 [2024-12-08 21:05:00.088995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.253 [2024-12-08 21:05:00.089074] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:39.253 [2024-12-08 21:05:00.089089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:39.253 [2024-12-08 21:05:00.089099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:39.253 [2024-12-08 21:05:00.089109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:39.253 [2024-12-08 21:05:00.089126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:39.253 [2024-12-08 21:05:00.089184] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:39.253 [2024-12-08 21:05:00.089194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:39.253 [2024-12-08 21:05:00.089212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:39.253 [2024-12-08 21:05:00.089222] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:39.253 [2024-12-08 21:05:00.089231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:39.253 [2024-12-08 21:05:00.089239] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:39.253 [2024-12-08 21:05:00.089248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:39.253 [2024-12-08 21:05:00.089257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:39.253 [2024-12-08 21:05:00.089287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:39.253 [2024-12-08 21:05:00.089296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089306] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:39.253 [2024-12-08 21:05:00.089314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:39.253 [2024-12-08 21:05:00.089324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:39.253 [2024-12-08 21:05:00.089332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:39.253 [2024-12-08 21:05:00.089341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:39.253 [2024-12-08 21:05:00.089359] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:39.253 [2024-12-08 21:05:00.089367] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:39.253 [2024-12-08 21:05:00.089384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:39.253 [2024-12-08 21:05:00.089393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:39.253 [2024-12-08 21:05:00.089410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:39.253 [2024-12-08 21:05:00.089419] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:39.253 [2024-12-08 21:05:00.089436] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:39.253 [2024-12-08 21:05:00.089445] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:39.253 [2024-12-08 21:05:00.089463] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:39.253 [2024-12-08 21:05:00.089472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:39.253 [2024-12-08 21:05:00.089481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:39.253 [2024-12-08 21:05:00.089505] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:39.253 [2024-12-08 21:05:00.089533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:39.253 [2024-12-08 21:05:00.089542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:39.253 [2024-12-08 21:05:00.089552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.253 [2024-12-08 21:05:00.089562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:39.253 [2024-12-08 21:05:00.089571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:39.253 [2024-12-08 21:05:00.089585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:39.253 [2024-12-08 21:05:00.089598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:39.253 [2024-12-08 21:05:00.089611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:39.253 [2024-12-08 21:05:00.089624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:39.253 [2024-12-08 21:05:00.089638] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:39.253 [2024-12-08 21:05:00.089651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:39.253 [2024-12-08 21:05:00.089661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:39.253 [2024-12-08 21:05:00.089671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:39.253 [2024-12-08 21:05:00.089681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:39.253 [2024-12-08 21:05:00.089691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:39.253 [2024-12-08 21:05:00.089700] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:39.254 [2024-12-08 21:05:00.089710] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:39.254 [2024-12-08 21:05:00.089719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:39.254 [2024-12-08 21:05:00.089729] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:39.254 [2024-12-08 21:05:00.089738] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:39.254 [2024-12-08 21:05:00.089748] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:39.254 [2024-12-08 21:05:00.089758] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:39.254 [2024-12-08 21:05:00.089767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:39.254 [2024-12-08 21:05:00.089778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:39.254 [2024-12-08 21:05:00.089787] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:39.254 [2024-12-08 21:05:00.089797] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:39.254 [2024-12-08 21:05:00.089808] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:39.254 [2024-12-08 21:05:00.089818] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:39.254 [2024-12-08 21:05:00.089828] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:39.254 [2024-12-08 21:05:00.089840] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:18:39.254 [2024-12-08 21:05:00.089851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.089860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:39.254 [2024-12-08 21:05:00.089870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:18:39.254 [2024-12-08 21:05:00.089879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.104730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.104769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:39.254 [2024-12-08 21:05:00.104784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.801 ms 00:18:39.254 [2024-12-08 21:05:00.104799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.104875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.104887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:39.254 [2024-12-08 21:05:00.104897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:39.254 [2024-12-08 21:05:00.104906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.141431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.141473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:39.254 [2024-12-08 21:05:00.141489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.466 ms 00:18:39.254 [2024-12-08 21:05:00.141499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.141543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.141556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:39.254 [2024-12-08 21:05:00.141566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:39.254 [2024-12-08 21:05:00.141575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.141888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.141904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:39.254 [2024-12-08 21:05:00.141915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:18:39.254 [2024-12-08 21:05:00.141928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.142039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.142054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:39.254 [2024-12-08 21:05:00.142063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:39.254 [2024-12-08 21:05:00.142107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.155873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.155909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:39.254 [2024-12-08 21:05:00.155924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.737 ms 00:18:39.254 [2024-12-08 
21:05:00.155934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.168952] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:39.254 [2024-12-08 21:05:00.168990] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:39.254 [2024-12-08 21:05:00.169005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.169015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:39.254 [2024-12-08 21:05:00.169025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.973 ms 00:18:39.254 [2024-12-08 21:05:00.169034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.192096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.192132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:39.254 [2024-12-08 21:05:00.192147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.024 ms 00:18:39.254 [2024-12-08 21:05:00.192157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.204897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.204932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:39.254 [2024-12-08 21:05:00.204962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.660 ms 00:18:39.254 [2024-12-08 21:05:00.204971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.219706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.219879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:39.254 [2024-12-08 21:05:00.219918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.697 ms 00:18:39.254 [2024-12-08 21:05:00.219929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.220476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.220527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:39.254 [2024-12-08 21:05:00.220554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:18:39.254 [2024-12-08 21:05:00.220564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.254 [2024-12-08 21:05:00.286235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.254 [2024-12-08 21:05:00.286299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:39.254 [2024-12-08 21:05:00.286315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.634 ms 00:18:39.254 [2024-12-08 21:05:00.286326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.514 [2024-12-08 21:05:00.297774] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:39.514 [2024-12-08 21:05:00.300002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.514 [2024-12-08 21:05:00.300044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:39.514 [2024-12-08 21:05:00.300059] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.622 ms 00:18:39.514 [2024-12-08 21:05:00.300069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.515 [2024-12-08 21:05:00.300168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.515 [2024-12-08 21:05:00.300227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:39.515 [2024-12-08 21:05:00.300256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:39.515 [2024-12-08 21:05:00.300266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.515 [2024-12-08 21:05:00.300345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.515 [2024-12-08 21:05:00.300377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:39.515 [2024-12-08 21:05:00.300388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:39.515 [2024-12-08 21:05:00.300399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.515 [2024-12-08 21:05:00.302068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.515 [2024-12-08 21:05:00.302115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:39.515 [2024-12-08 21:05:00.302127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:18:39.515 [2024-12-08 21:05:00.302137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.515 [2024-12-08 21:05:00.302167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.515 [2024-12-08 21:05:00.302178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:39.515 [2024-12-08 21:05:00.302189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:39.515 [2024-12-08 21:05:00.302204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.515 [2024-12-08 21:05:00.302240] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:39.515 [2024-12-08 21:05:00.302254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.515 [2024-12-08 21:05:00.302263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:39.515 [2024-12-08 21:05:00.302276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:39.515 [2024-12-08 21:05:00.302300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.515 [2024-12-08 21:05:00.327863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.515 [2024-12-08 21:05:00.327914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:39.515 [2024-12-08 21:05:00.327928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.526 ms 00:18:39.515 [2024-12-08 21:05:00.327939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.515 [2024-12-08 21:05:00.328010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.515 [2024-12-08 21:05:00.328033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:39.515 [2024-12-08 21:05:00.328045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:39.515 [2024-12-08 21:05:00.328055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.515 [2024-12-08 21:05:00.329390] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 263.864 ms, result 0 00:18:40.452  [2024-12-08T21:05:02.432Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-08T21:05:03.369Z] Copying: 47/1024 [MB] (23 MBps) [2024-12-08T21:05:04.750Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-08T21:05:05.686Z] Copying: 93/1024 [MB] (23 MBps) [2024-12-08T21:05:06.622Z] Copying: 117/1024 [MB] (23 MBps) [2024-12-08T21:05:07.560Z] Copying: 141/1024 [MB] (23 MBps) [2024-12-08T21:05:08.498Z] Copying: 165/1024 [MB] (24 MBps) [2024-12-08T21:05:09.436Z] Copying: 189/1024 [MB] (23 MBps) [2024-12-08T21:05:10.374Z] Copying: 212/1024 [MB] (23 MBps) [2024-12-08T21:05:11.753Z] Copying: 236/1024 [MB] (23 MBps) [2024-12-08T21:05:12.689Z] Copying: 260/1024 [MB] (23 MBps) [2024-12-08T21:05:13.626Z] Copying: 283/1024 [MB] (23 MBps) [2024-12-08T21:05:14.576Z] Copying: 307/1024 [MB] (23 MBps) [2024-12-08T21:05:15.510Z] Copying: 331/1024 [MB] (23 MBps) [2024-12-08T21:05:16.445Z] Copying: 355/1024 [MB] (23 MBps) [2024-12-08T21:05:17.382Z] Copying: 379/1024 [MB] (23 MBps) [2024-12-08T21:05:18.762Z] Copying: 402/1024 [MB] (23 MBps) [2024-12-08T21:05:19.756Z] Copying: 425/1024 [MB] (23 MBps) [2024-12-08T21:05:20.361Z] Copying: 450/1024 [MB] (24 MBps) [2024-12-08T21:05:21.739Z] Copying: 473/1024 [MB] (23 MBps) [2024-12-08T21:05:22.676Z] Copying: 497/1024 [MB] (23 MBps) [2024-12-08T21:05:23.614Z] Copying: 521/1024 [MB] (24 MBps) [2024-12-08T21:05:24.553Z] Copying: 545/1024 [MB] (24 MBps) [2024-12-08T21:05:25.489Z] Copying: 570/1024 [MB] (24 MBps) [2024-12-08T21:05:26.427Z] Copying: 593/1024 [MB] (23 MBps) [2024-12-08T21:05:27.366Z] Copying: 617/1024 [MB] (23 MBps) [2024-12-08T21:05:28.744Z] Copying: 640/1024 [MB] (23 MBps) [2024-12-08T21:05:29.681Z] Copying: 663/1024 [MB] (22 MBps) [2024-12-08T21:05:30.617Z] Copying: 687/1024 [MB] (23 MBps) [2024-12-08T21:05:31.552Z] Copying: 710/1024 [MB] (23 MBps) [2024-12-08T21:05:32.488Z] Copying: 734/1024 [MB] (23 MBps) [2024-12-08T21:05:33.426Z] Copying: 757/1024 [MB] (23 MBps) [2024-12-08T21:05:34.361Z] Copying: 781/1024 [MB] (23 MBps) [2024-12-08T21:05:35.735Z] Copying: 805/1024 [MB] (23 MBps) [2024-12-08T21:05:36.673Z] Copying: 829/1024 [MB] (23 MBps) [2024-12-08T21:05:37.611Z] Copying: 852/1024 [MB] (23 MBps) [2024-12-08T21:05:38.549Z] Copying: 876/1024 [MB] (23 MBps) [2024-12-08T21:05:39.488Z] Copying: 900/1024 [MB] (23 MBps) [2024-12-08T21:05:40.427Z] Copying: 923/1024 [MB] (23 MBps) [2024-12-08T21:05:41.364Z] Copying: 947/1024 [MB] (23 MBps) [2024-12-08T21:05:42.745Z] Copying: 970/1024 [MB] (23 MBps) [2024-12-08T21:05:43.685Z] Copying: 994/1024 [MB] (23 MBps) [2024-12-08T21:05:43.685Z] Copying: 1017/1024 [MB] (23 MBps) [2024-12-08T21:05:43.685Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-08 21:05:43.614365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.642 [2024-12-08 21:05:43.614422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.642 [2024-12-08 21:05:43.614440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.642 [2024-12-08 21:05:43.614451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.642 [2024-12-08 21:05:43.614478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.642 [2024-12-08 21:05:43.617286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.642 [2024-12-08 21:05:43.617325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO 
device 00:19:22.642 [2024-12-08 21:05:43.617346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.787 ms 00:19:22.642 [2024-12-08 21:05:43.617356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.642 [2024-12-08 21:05:43.618909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.642 [2024-12-08 21:05:43.618959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.642 [2024-12-08 21:05:43.619003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.526 ms 00:19:22.642 [2024-12-08 21:05:43.619030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.642 [2024-12-08 21:05:43.634398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.642 [2024-12-08 21:05:43.634445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:22.642 [2024-12-08 21:05:43.634460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.349 ms 00:19:22.642 [2024-12-08 21:05:43.634477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.642 [2024-12-08 21:05:43.639774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.642 [2024-12-08 21:05:43.639801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:22.642 [2024-12-08 21:05:43.639813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.262 ms 00:19:22.642 [2024-12-08 21:05:43.639823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.642 [2024-12-08 21:05:43.664457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.642 [2024-12-08 21:05:43.664521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.642 [2024-12-08 21:05:43.664536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.588 ms 00:19:22.642 [2024-12-08 21:05:43.664545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.642 [2024-12-08 21:05:43.679313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.642 [2024-12-08 21:05:43.679347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.642 [2024-12-08 21:05:43.679361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.717 ms 00:19:22.642 [2024-12-08 21:05:43.679371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.642 [2024-12-08 21:05:43.679513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.642 [2024-12-08 21:05:43.679531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.642 [2024-12-08 21:05:43.679542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:22.642 [2024-12-08 21:05:43.679552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.902 [2024-12-08 21:05:43.705637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.902 [2024-12-08 21:05:43.705668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:22.902 [2024-12-08 21:05:43.705681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.069 ms 00:19:22.902 [2024-12-08 21:05:43.705690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.902 [2024-12-08 21:05:43.730232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.902 [2024-12-08 21:05:43.730265] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:22.902 [2024-12-08 21:05:43.730278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.506 ms 00:19:22.902 [2024-12-08 21:05:43.730301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.902 [2024-12-08 21:05:43.754191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.902 [2024-12-08 21:05:43.754224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.902 [2024-12-08 21:05:43.754238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.853 ms 00:19:22.902 [2024-12-08 21:05:43.754247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.902 [2024-12-08 21:05:43.778331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.902 [2024-12-08 21:05:43.778362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.902 [2024-12-08 21:05:43.778375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.012 ms 00:19:22.902 [2024-12-08 21:05:43.778385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.902 [2024-12-08 21:05:43.778421] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.902 [2024-12-08 21:05:43.778440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.902 [2024-12-08 21:05:43.778459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.902 [2024-12-08 21:05:43.778469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 
0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.778999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779066] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.903 [2024-12-08 21:05:43.779330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 
21:05:43.779340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.904 [2024-12-08 21:05:43.779445] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.904 [2024-12-08 21:05:43.779455] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97a1d5b0-5381-44ba-bfc8-56f53a684fe6 00:19:22.904 [2024-12-08 21:05:43.779464] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.904 [2024-12-08 21:05:43.779473] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.904 [2024-12-08 21:05:43.779496] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.904 [2024-12-08 21:05:43.779505] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.904 [2024-12-08 21:05:43.779514] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.904 [2024-12-08 21:05:43.779523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.904 [2024-12-08 21:05:43.779532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.904 [2024-12-08 21:05:43.779540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.904 [2024-12-08 21:05:43.779558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.904 [2024-12-08 21:05:43.779568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.904 [2024-12-08 21:05:43.779577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.904 [2024-12-08 21:05:43.779587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:19:22.904 [2024-12-08 21:05:43.779599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.792500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.904 [2024-12-08 21:05:43.792544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.904 [2024-12-08 21:05:43.792572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.867 ms 00:19:22.904 [2024-12-08 21:05:43.792581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:22.904 [2024-12-08 21:05:43.792786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.904 [2024-12-08 21:05:43.792800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.904 [2024-12-08 21:05:43.792817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:19:22.904 [2024-12-08 21:05:43.792826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.827899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.827933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.904 [2024-12-08 21:05:43.827946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.827956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.828003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.828015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.904 [2024-12-08 21:05:43.828030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.828039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.828140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.828185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.904 [2024-12-08 21:05:43.828195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.828206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.828241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.828252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.904 [2024-12-08 21:05:43.828263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.828278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.903585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.903651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.904 [2024-12-08 21:05:43.903665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.903676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.933773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.933803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.904 [2024-12-08 21:05:43.933816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.933832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.933898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.933913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.904 [2024-12-08 21:05:43.933922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 
[2024-12-08 21:05:43.933931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.933976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.933989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.904 [2024-12-08 21:05:43.933998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.934007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.934175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.934194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.904 [2024-12-08 21:05:43.934205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.934215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.934260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.934275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:22.904 [2024-12-08 21:05:43.934286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.934296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.934340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.934353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.904 [2024-12-08 21:05:43.934364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.934373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.934420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.904 [2024-12-08 21:05:43.934435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.904 [2024-12-08 21:05:43.934446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.904 [2024-12-08 21:05:43.934470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.904 [2024-12-08 21:05:43.934624] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 320.248 ms, result 0 00:19:23.841 00:19:23.841 00:19:23.842 21:05:44 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:24.102 [2024-12-08 21:05:44.942479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:24.102 [2024-12-08 21:05:44.942639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74101 ] 00:19:24.102 [2024-12-08 21:05:45.110515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.360 [2024-12-08 21:05:45.258624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.618 [2024-12-08 21:05:45.506746] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:24.618 [2024-12-08 21:05:45.506814] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:24.618 [2024-12-08 21:05:45.655915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.618 [2024-12-08 21:05:45.655956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:24.618 [2024-12-08 21:05:45.655974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:24.618 [2024-12-08 21:05:45.655989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.618 [2024-12-08 21:05:45.656062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.618 [2024-12-08 21:05:45.656095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.618 [2024-12-08 21:05:45.656170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:24.618 [2024-12-08 21:05:45.656201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.618 [2024-12-08 21:05:45.656236] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:24.618 [2024-12-08 21:05:45.657208] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:24.618 [2024-12-08 21:05:45.657255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.618 [2024-12-08 21:05:45.657269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.618 [2024-12-08 21:05:45.657282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:19:24.618 [2024-12-08 21:05:45.657292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.618 [2024-12-08 21:05:45.658493] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:24.879 [2024-12-08 21:05:45.674449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.879 [2024-12-08 21:05:45.674529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:24.879 [2024-12-08 21:05:45.674545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.958 ms 00:19:24.879 [2024-12-08 21:05:45.674555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.879 [2024-12-08 21:05:45.674617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.879 [2024-12-08 21:05:45.674634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:24.879 [2024-12-08 21:05:45.674645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:24.879 [2024-12-08 21:05:45.674655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.879 [2024-12-08 21:05:45.679376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.879 [2024-12-08 
21:05:45.679426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.879 [2024-12-08 21:05:45.679471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.639 ms 00:19:24.879 [2024-12-08 21:05:45.679497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.879 [2024-12-08 21:05:45.679592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.879 [2024-12-08 21:05:45.679611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.879 [2024-12-08 21:05:45.679623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:24.879 [2024-12-08 21:05:45.679633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.879 [2024-12-08 21:05:45.679689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.879 [2024-12-08 21:05:45.679705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:24.879 [2024-12-08 21:05:45.679716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:24.879 [2024-12-08 21:05:45.679726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.879 [2024-12-08 21:05:45.679766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.879 [2024-12-08 21:05:45.683821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.879 [2024-12-08 21:05:45.683867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.879 [2024-12-08 21:05:45.683880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.072 ms 00:19:24.879 [2024-12-08 21:05:45.683890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.879 [2024-12-08 21:05:45.683928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.879 [2024-12-08 21:05:45.683943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:24.879 [2024-12-08 21:05:45.683954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:24.879 [2024-12-08 21:05:45.683969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.879 [2024-12-08 21:05:45.684008] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:24.879 [2024-12-08 21:05:45.684035] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:24.879 [2024-12-08 21:05:45.684069] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:24.879 [2024-12-08 21:05:45.684132] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:24.879 [2024-12-08 21:05:45.684243] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:24.879 [2024-12-08 21:05:45.684260] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:24.879 [2024-12-08 21:05:45.684281] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:24.879 [2024-12-08 21:05:45.684296] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:24.879 [2024-12-08 21:05:45.684310] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:24.879 [2024-12-08 21:05:45.684323] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:24.879 [2024-12-08 21:05:45.684334] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:24.879 [2024-12-08 21:05:45.684345] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:24.879 [2024-12-08 21:05:45.684356] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:24.879 [2024-12-08 21:05:45.684368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.879 [2024-12-08 21:05:45.684380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:24.879 [2024-12-08 21:05:45.684392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:19:24.879 [2024-12-08 21:05:45.684404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.879 [2024-12-08 21:05:45.684536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.879 [2024-12-08 21:05:45.684580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:24.879 [2024-12-08 21:05:45.684591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:24.879 [2024-12-08 21:05:45.684601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.879 [2024-12-08 21:05:45.684697] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:24.879 [2024-12-08 21:05:45.684714] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:24.879 [2024-12-08 21:05:45.684725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.879 [2024-12-08 21:05:45.684736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.879 [2024-12-08 21:05:45.684748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:24.879 [2024-12-08 21:05:45.684757] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:24.879 [2024-12-08 21:05:45.684767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:24.879 [2024-12-08 21:05:45.684777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:24.879 [2024-12-08 21:05:45.684786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:24.879 [2024-12-08 21:05:45.684796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.879 [2024-12-08 21:05:45.684805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:24.879 [2024-12-08 21:05:45.684814] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:24.879 [2024-12-08 21:05:45.684823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.879 [2024-12-08 21:05:45.684833] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:24.879 [2024-12-08 21:05:45.684842] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:24.879 [2024-12-08 21:05:45.684851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.879 [2024-12-08 21:05:45.684873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:24.879 [2024-12-08 21:05:45.684883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:24.879 [2024-12-08 21:05:45.684893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:24.879 [2024-12-08 21:05:45.684902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:24.879 [2024-12-08 21:05:45.684911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:24.879 [2024-12-08 21:05:45.684921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:24.879 [2024-12-08 21:05:45.684930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:24.880 [2024-12-08 21:05:45.684939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:24.880 [2024-12-08 21:05:45.684949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:24.880 [2024-12-08 21:05:45.684958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:24.880 [2024-12-08 21:05:45.684967] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:24.880 [2024-12-08 21:05:45.684976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:24.880 [2024-12-08 21:05:45.684985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:24.880 [2024-12-08 21:05:45.684994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:24.880 [2024-12-08 21:05:45.685003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:24.880 [2024-12-08 21:05:45.685012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:24.880 [2024-12-08 21:05:45.685021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:24.880 [2024-12-08 21:05:45.685030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:24.880 [2024-12-08 21:05:45.685039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:24.880 [2024-12-08 21:05:45.685049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:24.880 [2024-12-08 21:05:45.685060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.880 [2024-12-08 21:05:45.685069] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:24.880 [2024-12-08 21:05:45.685094] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:24.880 [2024-12-08 21:05:45.685104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.880 [2024-12-08 21:05:45.685113] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:24.880 [2024-12-08 21:05:45.685129] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:24.880 [2024-12-08 21:05:45.685139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.880 [2024-12-08 21:05:45.685149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.880 [2024-12-08 21:05:45.685160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:24.880 [2024-12-08 21:05:45.685170] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:24.880 [2024-12-08 21:05:45.685179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:24.880 [2024-12-08 21:05:45.685203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:24.880 [2024-12-08 21:05:45.685230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:24.880 [2024-12-08 21:05:45.685240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:24.880 [2024-12-08 21:05:45.685251] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:24.880 [2024-12-08 21:05:45.685264] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.880 [2024-12-08 21:05:45.685291] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:24.880 [2024-12-08 21:05:45.685301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:24.880 [2024-12-08 21:05:45.685312] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:24.880 [2024-12-08 21:05:45.685322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:24.880 [2024-12-08 21:05:45.685332] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:24.880 [2024-12-08 21:05:45.685343] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:24.880 [2024-12-08 21:05:45.685353] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:24.880 [2024-12-08 21:05:45.685363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:24.880 [2024-12-08 21:05:45.685373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:24.880 [2024-12-08 21:05:45.685383] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:24.880 [2024-12-08 21:05:45.685394] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:24.880 [2024-12-08 21:05:45.685404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:24.880 [2024-12-08 21:05:45.685430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:24.880 [2024-12-08 21:05:45.685440] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:24.880 [2024-12-08 21:05:45.685451] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.880 [2024-12-08 21:05:45.685462] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:24.880 [2024-12-08 21:05:45.685473] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:24.880 [2024-12-08 21:05:45.685483] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:24.880 [2024-12-08 21:05:45.685494] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:19:24.880 [2024-12-08 21:05:45.685506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.685516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:24.880 [2024-12-08 21:05:45.685527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:19:24.880 [2024-12-08 21:05:45.685538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.701509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.701545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.880 [2024-12-08 21:05:45.701561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.916 ms 00:19:24.880 [2024-12-08 21:05:45.701578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.701661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.701676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:24.880 [2024-12-08 21:05:45.701688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:24.880 [2024-12-08 21:05:45.701697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.746917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.746963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.880 [2024-12-08 21:05:45.746979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.150 ms 00:19:24.880 [2024-12-08 21:05:45.746991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.747045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.747076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.880 [2024-12-08 21:05:45.747099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.880 [2024-12-08 21:05:45.747113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.747508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.747526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.880 [2024-12-08 21:05:45.747539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:19:24.880 [2024-12-08 21:05:45.747583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.747726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.747744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.880 [2024-12-08 21:05:45.747755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:19:24.880 [2024-12-08 21:05:45.747766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.762787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.762823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.880 [2024-12-08 21:05:45.762838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.996 ms 00:19:24.880 [2024-12-08 
21:05:45.762849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.776342] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:24.880 [2024-12-08 21:05:45.776393] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:24.880 [2024-12-08 21:05:45.776409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.776421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:24.880 [2024-12-08 21:05:45.776433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.455 ms 00:19:24.880 [2024-12-08 21:05:45.776443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.801189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.801253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:24.880 [2024-12-08 21:05:45.801269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.683 ms 00:19:24.880 [2024-12-08 21:05:45.801279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.814392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.814441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:24.880 [2024-12-08 21:05:45.814455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.063 ms 00:19:24.880 [2024-12-08 21:05:45.814465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.827303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.827362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:24.880 [2024-12-08 21:05:45.827376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.799 ms 00:19:24.880 [2024-12-08 21:05:45.827386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.880 [2024-12-08 21:05:45.827765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.880 [2024-12-08 21:05:45.827785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:24.880 [2024-12-08 21:05:45.827797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:19:24.880 [2024-12-08 21:05:45.827807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.881 [2024-12-08 21:05:45.887842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.881 [2024-12-08 21:05:45.887898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:24.881 [2024-12-08 21:05:45.887914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.013 ms 00:19:24.881 [2024-12-08 21:05:45.887924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.881 [2024-12-08 21:05:45.897938] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:24.881 [2024-12-08 21:05:45.899880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.881 [2024-12-08 21:05:45.899907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:24.881 [2024-12-08 21:05:45.899922] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.906 ms 00:19:24.881 [2024-12-08 21:05:45.899937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.881 [2024-12-08 21:05:45.900013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.881 [2024-12-08 21:05:45.900031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:24.881 [2024-12-08 21:05:45.900042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:24.881 [2024-12-08 21:05:45.900052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.881 [2024-12-08 21:05:45.900179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.881 [2024-12-08 21:05:45.900213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:24.881 [2024-12-08 21:05:45.900226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:24.881 [2024-12-08 21:05:45.900236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.881 [2024-12-08 21:05:45.901888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.881 [2024-12-08 21:05:45.901914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:24.881 [2024-12-08 21:05:45.901926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.618 ms 00:19:24.881 [2024-12-08 21:05:45.901935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.881 [2024-12-08 21:05:45.901965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.881 [2024-12-08 21:05:45.901978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:24.881 [2024-12-08 21:05:45.901995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:24.881 [2024-12-08 21:05:45.902005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.881 [2024-12-08 21:05:45.902042] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:24.881 [2024-12-08 21:05:45.902057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.881 [2024-12-08 21:05:45.902097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:24.881 [2024-12-08 21:05:45.902110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:24.881 [2024-12-08 21:05:45.902121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.140 [2024-12-08 21:05:45.927725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.140 [2024-12-08 21:05:45.927760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:25.140 [2024-12-08 21:05:45.927775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.581 ms 00:19:25.140 [2024-12-08 21:05:45.927785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.140 [2024-12-08 21:05:45.927859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.140 [2024-12-08 21:05:45.927875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:25.140 [2024-12-08 21:05:45.927886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:25.140 [2024-12-08 21:05:45.927896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.140 [2024-12-08 21:05:45.929154] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 272.691 ms, result 0 00:19:26.076  [2024-12-08T21:05:48.529Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-08T21:05:49.099Z] Copying: 45/1024 [MB] (22 MBps) [2024-12-08T21:05:50.477Z] Copying: 67/1024 [MB] (22 MBps) [2024-12-08T21:05:51.415Z] Copying: 91/1024 [MB] (23 MBps) [2024-12-08T21:05:52.353Z] Copying: 113/1024 [MB] (22 MBps) [2024-12-08T21:05:53.289Z] Copying: 135/1024 [MB] (22 MBps) [2024-12-08T21:05:54.226Z] Copying: 157/1024 [MB] (22 MBps) [2024-12-08T21:05:55.164Z] Copying: 180/1024 [MB] (22 MBps) [2024-12-08T21:05:56.113Z] Copying: 203/1024 [MB] (22 MBps) [2024-12-08T21:05:57.486Z] Copying: 225/1024 [MB] (22 MBps) [2024-12-08T21:05:58.421Z] Copying: 248/1024 [MB] (22 MBps) [2024-12-08T21:05:59.357Z] Copying: 270/1024 [MB] (22 MBps) [2024-12-08T21:06:00.296Z] Copying: 293/1024 [MB] (22 MBps) [2024-12-08T21:06:01.230Z] Copying: 315/1024 [MB] (22 MBps) [2024-12-08T21:06:02.166Z] Copying: 337/1024 [MB] (22 MBps) [2024-12-08T21:06:03.127Z] Copying: 360/1024 [MB] (22 MBps) [2024-12-08T21:06:04.505Z] Copying: 382/1024 [MB] (21 MBps) [2024-12-08T21:06:05.440Z] Copying: 404/1024 [MB] (21 MBps) [2024-12-08T21:06:06.377Z] Copying: 425/1024 [MB] (21 MBps) [2024-12-08T21:06:07.314Z] Copying: 447/1024 [MB] (21 MBps) [2024-12-08T21:06:08.252Z] Copying: 469/1024 [MB] (21 MBps) [2024-12-08T21:06:09.191Z] Copying: 491/1024 [MB] (22 MBps) [2024-12-08T21:06:10.128Z] Copying: 513/1024 [MB] (21 MBps) [2024-12-08T21:06:11.508Z] Copying: 535/1024 [MB] (22 MBps) [2024-12-08T21:06:12.447Z] Copying: 558/1024 [MB] (22 MBps) [2024-12-08T21:06:13.385Z] Copying: 580/1024 [MB] (22 MBps) [2024-12-08T21:06:14.323Z] Copying: 602/1024 [MB] (22 MBps) [2024-12-08T21:06:15.259Z] Copying: 625/1024 [MB] (22 MBps) [2024-12-08T21:06:16.196Z] Copying: 647/1024 [MB] (21 MBps) [2024-12-08T21:06:17.133Z] Copying: 669/1024 [MB] (22 MBps) [2024-12-08T21:06:18.510Z] Copying: 692/1024 [MB] (22 MBps) [2024-12-08T21:06:19.487Z] Copying: 715/1024 [MB] (22 MBps) [2024-12-08T21:06:20.422Z] Copying: 737/1024 [MB] (22 MBps) [2024-12-08T21:06:21.437Z] Copying: 760/1024 [MB] (22 MBps) [2024-12-08T21:06:22.374Z] Copying: 783/1024 [MB] (22 MBps) [2024-12-08T21:06:23.315Z] Copying: 806/1024 [MB] (22 MBps) [2024-12-08T21:06:24.256Z] Copying: 828/1024 [MB] (22 MBps) [2024-12-08T21:06:25.198Z] Copying: 851/1024 [MB] (22 MBps) [2024-12-08T21:06:26.132Z] Copying: 873/1024 [MB] (21 MBps) [2024-12-08T21:06:27.509Z] Copying: 895/1024 [MB] (22 MBps) [2024-12-08T21:06:28.445Z] Copying: 917/1024 [MB] (22 MBps) [2024-12-08T21:06:29.380Z] Copying: 940/1024 [MB] (22 MBps) [2024-12-08T21:06:30.317Z] Copying: 963/1024 [MB] (22 MBps) [2024-12-08T21:06:31.254Z] Copying: 985/1024 [MB] (22 MBps) [2024-12-08T21:06:32.192Z] Copying: 1007/1024 [MB] (22 MBps) [2024-12-08T21:06:32.192Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-08 21:06:32.152820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.149 [2024-12-08 21:06:32.153175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:11.149 [2024-12-08 21:06:32.153320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.149 [2024-12-08 21:06:32.153372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.149 [2024-12-08 21:06:32.153507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.149 [2024-12-08 21:06:32.156567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:11.149 [2024-12-08 21:06:32.156707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:11.149 [2024-12-08 21:06:32.156732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:20:11.149 [2024-12-08 21:06:32.156743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.149 [2024-12-08 21:06:32.156991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.149 [2024-12-08 21:06:32.157010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:11.149 [2024-12-08 21:06:32.157023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:20:11.149 [2024-12-08 21:06:32.157034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.149 [2024-12-08 21:06:32.160044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.149 [2024-12-08 21:06:32.160233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:11.149 [2024-12-08 21:06:32.160265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.992 ms 00:20:11.149 [2024-12-08 21:06:32.160277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.149 [2024-12-08 21:06:32.165913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.149 [2024-12-08 21:06:32.165943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:11.149 [2024-12-08 21:06:32.165972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.609 ms 00:20:11.149 [2024-12-08 21:06:32.165981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.409 [2024-12-08 21:06:32.192182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.409 [2024-12-08 21:06:32.192227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:11.409 [2024-12-08 21:06:32.192260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.134 ms 00:20:11.409 [2024-12-08 21:06:32.192271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.409 [2024-12-08 21:06:32.207638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.409 [2024-12-08 21:06:32.207676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:11.409 [2024-12-08 21:06:32.207708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.323 ms 00:20:11.409 [2024-12-08 21:06:32.207725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.409 [2024-12-08 21:06:32.207866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.409 [2024-12-08 21:06:32.207885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:11.409 [2024-12-08 21:06:32.207897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:11.409 [2024-12-08 21:06:32.207907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.409 [2024-12-08 21:06:32.233372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.409 [2024-12-08 21:06:32.233409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:11.409 [2024-12-08 21:06:32.233440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.446 ms 00:20:11.409 [2024-12-08 21:06:32.233451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.409 [2024-12-08 21:06:32.258364] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.409 [2024-12-08 21:06:32.258534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:11.409 [2024-12-08 21:06:32.258574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.876 ms 00:20:11.409 [2024-12-08 21:06:32.258585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.409 [2024-12-08 21:06:32.283058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.409 [2024-12-08 21:06:32.283101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:11.409 [2024-12-08 21:06:32.283132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.432 ms 00:20:11.409 [2024-12-08 21:06:32.283141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.409 [2024-12-08 21:06:32.307727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.409 [2024-12-08 21:06:32.307764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:11.409 [2024-12-08 21:06:32.307795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.511 ms 00:20:11.409 [2024-12-08 21:06:32.307805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.409 [2024-12-08 21:06:32.307857] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:11.409 [2024-12-08 21:06:32.307882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.307996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 
state: free 00:20:11.409 [2024-12-08 21:06:32.308047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:11.409 [2024-12-08 21:06:32.308291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 
0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308975] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.308985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:11.410 [2024-12-08 21:06:32.309104] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:11.410 [2024-12-08 21:06:32.309115] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97a1d5b0-5381-44ba-bfc8-56f53a684fe6 00:20:11.410 [2024-12-08 21:06:32.309126] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:11.410 [2024-12-08 21:06:32.309136] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:11.410 [2024-12-08 21:06:32.309145] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:11.410 [2024-12-08 21:06:32.309156] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:11.410 [2024-12-08 21:06:32.309165] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:11.410 [2024-12-08 21:06:32.309186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:11.410 [2024-12-08 21:06:32.309197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:11.410 [2024-12-08 21:06:32.309219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:11.410 [2024-12-08 21:06:32.309228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:11.410 [2024-12-08 21:06:32.309238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.410 [2024-12-08 21:06:32.309249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:11.410 [2024-12-08 21:06:32.309264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:20:11.410 [2024-12-08 21:06:32.309275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.410 [2024-12-08 21:06:32.322886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.410 [2024-12-08 21:06:32.322920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:11.410 [2024-12-08 21:06:32.322951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 13.576 ms 00:20:11.410 [2024-12-08 21:06:32.322960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.410 [2024-12-08 21:06:32.323224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.410 [2024-12-08 21:06:32.323252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:11.410 [2024-12-08 21:06:32.323265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:20:11.410 [2024-12-08 21:06:32.323275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.410 [2024-12-08 21:06:32.359389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.410 [2024-12-08 21:06:32.359428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.410 [2024-12-08 21:06:32.359459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.410 [2024-12-08 21:06:32.359469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.411 [2024-12-08 21:06:32.359524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.411 [2024-12-08 21:06:32.359544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.411 [2024-12-08 21:06:32.359555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.411 [2024-12-08 21:06:32.359564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.411 [2024-12-08 21:06:32.359643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.411 [2024-12-08 21:06:32.359661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.411 [2024-12-08 21:06:32.359673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.411 [2024-12-08 21:06:32.359682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.411 [2024-12-08 21:06:32.359702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.411 [2024-12-08 21:06:32.359713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.411 [2024-12-08 21:06:32.359730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.411 [2024-12-08 21:06:32.359740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.411 [2024-12-08 21:06:32.441121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.411 [2024-12-08 21:06:32.441182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.411 [2024-12-08 21:06:32.441199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.411 [2024-12-08 21:06:32.441210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.675 [2024-12-08 21:06:32.472690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.675 [2024-12-08 21:06:32.472846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.675 [2024-12-08 21:06:32.472878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.675 [2024-12-08 21:06:32.472890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.675 [2024-12-08 21:06:32.472967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.675 [2024-12-08 21:06:32.472983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.675 
[2024-12-08 21:06:32.472994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.675 [2024-12-08 21:06:32.473004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.675 [2024-12-08 21:06:32.473052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.675 [2024-12-08 21:06:32.473066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.675 [2024-12-08 21:06:32.473115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.675 [2024-12-08 21:06:32.473134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.675 [2024-12-08 21:06:32.473278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.675 [2024-12-08 21:06:32.473296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.675 [2024-12-08 21:06:32.473308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.675 [2024-12-08 21:06:32.473318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.675 [2024-12-08 21:06:32.473363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.675 [2024-12-08 21:06:32.473380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:11.675 [2024-12-08 21:06:32.473391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.675 [2024-12-08 21:06:32.473401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.675 [2024-12-08 21:06:32.473461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.675 [2024-12-08 21:06:32.473475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.675 [2024-12-08 21:06:32.473486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.675 [2024-12-08 21:06:32.473496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.675 [2024-12-08 21:06:32.473557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.675 [2024-12-08 21:06:32.473578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.675 [2024-12-08 21:06:32.473590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.675 [2024-12-08 21:06:32.473605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.675 [2024-12-08 21:06:32.473732] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 320.885 ms, result 0 00:20:12.609 00:20:12.609 00:20:12.609 21:06:33 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:14.510 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:14.510 21:06:35 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:14.510 [2024-12-08 21:06:35.164986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
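[Editor's note: the two restore.sh commands logged just above are restated below with comments for readability. This is only an annotated sketch of what the log already shows, not an extra test step; the flag descriptions assume spdk_dd follows dd-like semantics for --seek and should be checked against spdk_dd --help.]

# restore.sh line 76: verify the data written to the FTL bdev in the previous pass
md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

# restore.sh line 79: write the test file onto the ftl0 bdev again.
# --ob=ftl0 selects the FTL bdev as the output device, --json points spdk_dd at
# the bdev configuration saved in ftl.json, and --seek=131072 skips that many
# output blocks before writing (assumed to mirror dd's seek behaviour).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
    --seek=131072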
00:20:14.510 [2024-12-08 21:06:35.165193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74605 ] 00:20:14.510 [2024-12-08 21:06:35.328675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.510 [2024-12-08 21:06:35.475005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.768 [2024-12-08 21:06:35.727325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.768 [2024-12-08 21:06:35.727391] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.028 [2024-12-08 21:06:35.876061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.028 [2024-12-08 21:06:35.876121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:15.028 [2024-12-08 21:06:35.876154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:15.028 [2024-12-08 21:06:35.876169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.028 [2024-12-08 21:06:35.876236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.028 [2024-12-08 21:06:35.876253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.028 [2024-12-08 21:06:35.876264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:15.028 [2024-12-08 21:06:35.876273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.028 [2024-12-08 21:06:35.876300] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:15.028 [2024-12-08 21:06:35.877016] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:15.028 [2024-12-08 21:06:35.877044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.028 [2024-12-08 21:06:35.877056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.028 [2024-12-08 21:06:35.877066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:20:15.028 [2024-12-08 21:06:35.877092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.028 [2024-12-08 21:06:35.878120] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:15.028 [2024-12-08 21:06:35.891033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.028 [2024-12-08 21:06:35.891081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:15.028 [2024-12-08 21:06:35.891115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.915 ms 00:20:15.028 [2024-12-08 21:06:35.891125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.028 [2024-12-08 21:06:35.891201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.028 [2024-12-08 21:06:35.891218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:15.028 [2024-12-08 21:06:35.891229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:15.028 [2024-12-08 21:06:35.891239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.028 [2024-12-08 21:06:35.895395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.028 [2024-12-08 
21:06:35.895428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.028 [2024-12-08 21:06:35.895442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.088 ms 00:20:15.028 [2024-12-08 21:06:35.895452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.028 [2024-12-08 21:06:35.895538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.028 [2024-12-08 21:06:35.895555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.028 [2024-12-08 21:06:35.895565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:15.028 [2024-12-08 21:06:35.895574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.028 [2024-12-08 21:06:35.895639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.028 [2024-12-08 21:06:35.895654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:15.028 [2024-12-08 21:06:35.895665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:15.028 [2024-12-08 21:06:35.895674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.028 [2024-12-08 21:06:35.895710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.028 [2024-12-08 21:06:35.899122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.029 [2024-12-08 21:06:35.899153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.029 [2024-12-08 21:06:35.899183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.427 ms 00:20:15.029 [2024-12-08 21:06:35.899192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.029 [2024-12-08 21:06:35.899228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.029 [2024-12-08 21:06:35.899241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:15.029 [2024-12-08 21:06:35.899252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:15.029 [2024-12-08 21:06:35.899265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.029 [2024-12-08 21:06:35.899290] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:15.029 [2024-12-08 21:06:35.899315] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:15.029 [2024-12-08 21:06:35.899349] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:15.029 [2024-12-08 21:06:35.899366] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:15.029 [2024-12-08 21:06:35.899447] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:15.029 [2024-12-08 21:06:35.899459] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:15.029 [2024-12-08 21:06:35.899474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:15.029 [2024-12-08 21:06:35.899486] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:15.029 [2024-12-08 21:06:35.899497] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:15.029 [2024-12-08 21:06:35.899507] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:15.029 [2024-12-08 21:06:35.899515] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:15.029 [2024-12-08 21:06:35.899524] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:15.029 [2024-12-08 21:06:35.899532] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:15.029 [2024-12-08 21:06:35.899542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.029 [2024-12-08 21:06:35.899551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:15.029 [2024-12-08 21:06:35.899560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:15.029 [2024-12-08 21:06:35.899569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.029 [2024-12-08 21:06:35.899626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.029 [2024-12-08 21:06:35.899638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:15.029 [2024-12-08 21:06:35.899648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:15.029 [2024-12-08 21:06:35.899656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.029 [2024-12-08 21:06:35.899734] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:15.029 [2024-12-08 21:06:35.899751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:15.029 [2024-12-08 21:06:35.899761] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.029 [2024-12-08 21:06:35.899770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.029 [2024-12-08 21:06:35.899779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:15.029 [2024-12-08 21:06:35.899787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:15.029 [2024-12-08 21:06:35.899795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:15.029 [2024-12-08 21:06:35.899805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:15.029 [2024-12-08 21:06:35.899813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:15.029 [2024-12-08 21:06:35.899821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.029 [2024-12-08 21:06:35.899829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:15.029 [2024-12-08 21:06:35.899838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:15.029 [2024-12-08 21:06:35.899846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.029 [2024-12-08 21:06:35.899854] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:15.029 [2024-12-08 21:06:35.899862] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:15.029 [2024-12-08 21:06:35.899870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.029 [2024-12-08 21:06:35.899890] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:15.029 [2024-12-08 21:06:35.899899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:15.029 [2024-12-08 21:06:35.899907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:15.029 [2024-12-08 21:06:35.899915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:15.029 [2024-12-08 21:06:35.899923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:15.029 [2024-12-08 21:06:35.899931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:15.029 [2024-12-08 21:06:35.899939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:15.029 [2024-12-08 21:06:35.899947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:15.029 [2024-12-08 21:06:35.899955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:15.029 [2024-12-08 21:06:35.899963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:15.029 [2024-12-08 21:06:35.899971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:15.029 [2024-12-08 21:06:35.899979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:15.029 [2024-12-08 21:06:35.899987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:15.029 [2024-12-08 21:06:35.899994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:15.029 [2024-12-08 21:06:35.900002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:15.029 [2024-12-08 21:06:35.900010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:15.029 [2024-12-08 21:06:35.900019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:15.029 [2024-12-08 21:06:35.900027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:15.029 [2024-12-08 21:06:35.900035] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:15.029 [2024-12-08 21:06:35.900042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:15.029 [2024-12-08 21:06:35.900050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.029 [2024-12-08 21:06:35.900058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:15.029 [2024-12-08 21:06:35.900066] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:15.029 [2024-12-08 21:06:35.900074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.029 [2024-12-08 21:06:35.900082] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:15.029 [2024-12-08 21:06:35.900094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:15.029 [2024-12-08 21:06:35.900479] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.029 [2024-12-08 21:06:35.900554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.029 [2024-12-08 21:06:35.900587] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:15.029 [2024-12-08 21:06:35.900686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:15.029 [2024-12-08 21:06:35.900745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:15.029 [2024-12-08 21:06:35.900778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:15.029 [2024-12-08 21:06:35.900811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:15.029 [2024-12-08 21:06:35.900943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:15.029 [2024-12-08 21:06:35.900991] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:15.029 [2024-12-08 21:06:35.901149] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.029 [2024-12-08 21:06:35.901219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:15.029 [2024-12-08 21:06:35.901347] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:15.029 [2024-12-08 21:06:35.901401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:15.029 [2024-12-08 21:06:35.901583] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:15.029 [2024-12-08 21:06:35.901615] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:15.029 [2024-12-08 21:06:35.901626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:15.029 [2024-12-08 21:06:35.901642] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:15.029 [2024-12-08 21:06:35.901652] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:15.029 [2024-12-08 21:06:35.901666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:15.029 [2024-12-08 21:06:35.901679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:15.029 [2024-12-08 21:06:35.901691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:15.029 [2024-12-08 21:06:35.901701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:15.029 [2024-12-08 21:06:35.901718] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:15.029 [2024-12-08 21:06:35.901731] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:15.029 [2024-12-08 21:06:35.901742] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.029 [2024-12-08 21:06:35.901758] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:15.029 [2024-12-08 21:06:35.901770] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:15.029 [2024-12-08 21:06:35.901783] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:15.030 [2024-12-08 21:06:35.901799] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:15.030 [2024-12-08 21:06:35.901810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:35.901821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:15.030 [2024-12-08 21:06:35.901832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:20:15.030 [2024-12-08 21:06:35.901841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:35.916712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:35.916860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.030 [2024-12-08 21:06:35.916885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.791 ms 00:20:15.030 [2024-12-08 21:06:35.916902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:35.916982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:35.916997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:15.030 [2024-12-08 21:06:35.917007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:15.030 [2024-12-08 21:06:35.917016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:35.956921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:35.956967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.030 [2024-12-08 21:06:35.956983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.812 ms 00:20:15.030 [2024-12-08 21:06:35.956993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:35.957041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:35.957056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.030 [2024-12-08 21:06:35.957066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.030 [2024-12-08 21:06:35.957109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:35.957493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:35.957531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.030 [2024-12-08 21:06:35.957543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:20:15.030 [2024-12-08 21:06:35.957558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:35.957685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:35.957708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.030 [2024-12-08 21:06:35.957720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:15.030 [2024-12-08 21:06:35.957729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:35.971378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:35.971527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.030 [2024-12-08 21:06:35.971686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.624 ms 00:20:15.030 [2024-12-08 
21:06:35.971732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:35.984536] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:15.030 [2024-12-08 21:06:35.984686] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:15.030 [2024-12-08 21:06:35.984707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:35.984718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:15.030 [2024-12-08 21:06:35.984729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.836 ms 00:20:15.030 [2024-12-08 21:06:35.984738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:36.007681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:36.007718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:15.030 [2024-12-08 21:06:36.007733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.879 ms 00:20:15.030 [2024-12-08 21:06:36.007743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:36.020091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:36.020260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:15.030 [2024-12-08 21:06:36.020285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.301 ms 00:20:15.030 [2024-12-08 21:06:36.020295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:36.032603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:36.032662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:15.030 [2024-12-08 21:06:36.032678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.239 ms 00:20:15.030 [2024-12-08 21:06:36.032686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.030 [2024-12-08 21:06:36.033045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.030 [2024-12-08 21:06:36.033064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:15.030 [2024-12-08 21:06:36.033111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:15.030 [2024-12-08 21:06:36.033125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.289 [2024-12-08 21:06:36.093558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.289 [2024-12-08 21:06:36.093615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:15.289 [2024-12-08 21:06:36.093632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.411 ms 00:20:15.289 [2024-12-08 21:06:36.093642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.289 [2024-12-08 21:06:36.103489] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:15.289 [2024-12-08 21:06:36.105409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.289 [2024-12-08 21:06:36.105439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:15.289 [2024-12-08 21:06:36.105469] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.710 ms 00:20:15.289 [2024-12-08 21:06:36.105484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.289 [2024-12-08 21:06:36.105574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.289 [2024-12-08 21:06:36.105591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:15.289 [2024-12-08 21:06:36.105602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:15.289 [2024-12-08 21:06:36.105611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.289 [2024-12-08 21:06:36.105680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.289 [2024-12-08 21:06:36.105696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:15.289 [2024-12-08 21:06:36.105705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:15.289 [2024-12-08 21:06:36.105714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.290 [2024-12-08 21:06:36.107317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.290 [2024-12-08 21:06:36.107470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:15.290 [2024-12-08 21:06:36.107495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:20:15.290 [2024-12-08 21:06:36.107506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.290 [2024-12-08 21:06:36.107542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.290 [2024-12-08 21:06:36.107555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:15.290 [2024-12-08 21:06:36.107574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:15.290 [2024-12-08 21:06:36.107583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.290 [2024-12-08 21:06:36.107620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:15.290 [2024-12-08 21:06:36.107635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.290 [2024-12-08 21:06:36.107649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:15.290 [2024-12-08 21:06:36.107659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:15.290 [2024-12-08 21:06:36.107669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.290 [2024-12-08 21:06:36.132069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.290 [2024-12-08 21:06:36.132115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:15.290 [2024-12-08 21:06:36.132145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.380 ms 00:20:15.290 [2024-12-08 21:06:36.132155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.290 [2024-12-08 21:06:36.132231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.290 [2024-12-08 21:06:36.132247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:15.290 [2024-12-08 21:06:36.132258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:15.290 [2024-12-08 21:06:36.132268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.290 [2024-12-08 21:06:36.133501] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 256.879 ms, result 0 00:20:16.227  [2024-12-08T21:06:38.208Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-08T21:06:39.587Z] Copying: 46/1024 [MB] (23 MBps) [2024-12-08T21:06:40.155Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-08T21:06:41.533Z] Copying: 94/1024 [MB] (23 MBps) [2024-12-08T21:06:42.470Z] Copying: 118/1024 [MB] (23 MBps) [2024-12-08T21:06:43.407Z] Copying: 141/1024 [MB] (23 MBps) [2024-12-08T21:06:44.346Z] Copying: 165/1024 [MB] (23 MBps) [2024-12-08T21:06:45.285Z] Copying: 189/1024 [MB] (23 MBps) [2024-12-08T21:06:46.222Z] Copying: 213/1024 [MB] (24 MBps) [2024-12-08T21:06:47.158Z] Copying: 237/1024 [MB] (23 MBps) [2024-12-08T21:06:48.537Z] Copying: 261/1024 [MB] (23 MBps) [2024-12-08T21:06:49.474Z] Copying: 284/1024 [MB] (23 MBps) [2024-12-08T21:06:50.409Z] Copying: 308/1024 [MB] (23 MBps) [2024-12-08T21:06:51.354Z] Copying: 332/1024 [MB] (23 MBps) [2024-12-08T21:06:52.322Z] Copying: 356/1024 [MB] (23 MBps) [2024-12-08T21:06:53.257Z] Copying: 379/1024 [MB] (23 MBps) [2024-12-08T21:06:54.190Z] Copying: 402/1024 [MB] (23 MBps) [2024-12-08T21:06:55.568Z] Copying: 426/1024 [MB] (23 MBps) [2024-12-08T21:06:56.504Z] Copying: 450/1024 [MB] (23 MBps) [2024-12-08T21:06:57.441Z] Copying: 474/1024 [MB] (23 MBps) [2024-12-08T21:06:58.380Z] Copying: 498/1024 [MB] (24 MBps) [2024-12-08T21:06:59.319Z] Copying: 522/1024 [MB] (23 MBps) [2024-12-08T21:07:00.259Z] Copying: 546/1024 [MB] (23 MBps) [2024-12-08T21:07:01.206Z] Copying: 569/1024 [MB] (23 MBps) [2024-12-08T21:07:02.587Z] Copying: 593/1024 [MB] (23 MBps) [2024-12-08T21:07:03.155Z] Copying: 617/1024 [MB] (24 MBps) [2024-12-08T21:07:04.536Z] Copying: 641/1024 [MB] (24 MBps) [2024-12-08T21:07:05.473Z] Copying: 664/1024 [MB] (23 MBps) [2024-12-08T21:07:06.407Z] Copying: 688/1024 [MB] (23 MBps) [2024-12-08T21:07:07.343Z] Copying: 712/1024 [MB] (24 MBps) [2024-12-08T21:07:08.282Z] Copying: 736/1024 [MB] (23 MBps) [2024-12-08T21:07:09.220Z] Copying: 760/1024 [MB] (24 MBps) [2024-12-08T21:07:10.156Z] Copying: 784/1024 [MB] (23 MBps) [2024-12-08T21:07:11.534Z] Copying: 808/1024 [MB] (23 MBps) [2024-12-08T21:07:12.472Z] Copying: 833/1024 [MB] (24 MBps) [2024-12-08T21:07:13.412Z] Copying: 858/1024 [MB] (25 MBps) [2024-12-08T21:07:14.351Z] Copying: 882/1024 [MB] (24 MBps) [2024-12-08T21:07:15.298Z] Copying: 907/1024 [MB] (24 MBps) [2024-12-08T21:07:16.234Z] Copying: 931/1024 [MB] (23 MBps) [2024-12-08T21:07:17.171Z] Copying: 954/1024 [MB] (23 MBps) [2024-12-08T21:07:18.548Z] Copying: 978/1024 [MB] (24 MBps) [2024-12-08T21:07:19.487Z] Copying: 1002/1024 [MB] (23 MBps) [2024-12-08T21:07:20.427Z] Copying: 1023/1024 [MB] (20 MBps) [2024-12-08T21:07:20.427Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-08 21:07:20.076052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.076314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:59.384 [2024-12-08 21:07:20.076473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:59.384 [2024-12-08 21:07:20.076552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.078890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.384 [2024-12-08 21:07:20.085056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.085211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO 
device 00:20:59.384 [2024-12-08 21:07:20.085324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.851 ms 00:20:59.384 [2024-12-08 21:07:20.085368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.096529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.096681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:59.384 [2024-12-08 21:07:20.096715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.144 ms 00:20:59.384 [2024-12-08 21:07:20.096726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.117250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.117287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:59.384 [2024-12-08 21:07:20.117319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.499 ms 00:20:59.384 [2024-12-08 21:07:20.117329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.122605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.122634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:59.384 [2024-12-08 21:07:20.122647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.241 ms 00:20:59.384 [2024-12-08 21:07:20.122663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.147241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.147394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:59.384 [2024-12-08 21:07:20.147419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.534 ms 00:20:59.384 [2024-12-08 21:07:20.147430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.162137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.162320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:59.384 [2024-12-08 21:07:20.162345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.666 ms 00:20:59.384 [2024-12-08 21:07:20.162357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.264253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.264296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:59.384 [2024-12-08 21:07:20.264315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.837 ms 00:20:59.384 [2024-12-08 21:07:20.264327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.289256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.289292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:59.384 [2024-12-08 21:07:20.289323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.885 ms 00:20:59.384 [2024-12-08 21:07:20.289333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.313628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.313663] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:59.384 [2024-12-08 21:07:20.313690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.259 ms 00:20:59.384 [2024-12-08 21:07:20.313699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.340663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.340825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:59.384 [2024-12-08 21:07:20.340849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.927 ms 00:20:59.384 [2024-12-08 21:07:20.340860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.367037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.384 [2024-12-08 21:07:20.367098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:59.384 [2024-12-08 21:07:20.367131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.083 ms 00:20:59.384 [2024-12-08 21:07:20.367141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.384 [2024-12-08 21:07:20.367179] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:59.384 [2024-12-08 21:07:20.367199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117248 / 261120 wr_cnt: 1 state: open 00:20:59.384 [2024-12-08 21:07:20.367212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:59.384 [2024-12-08 21:07:20.367410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367655] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 
21:07:20.367902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.367990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 
00:20:59.385 [2024-12-08 21:07:20.368206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:59.385 [2024-12-08 21:07:20.368321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:59.385 [2024-12-08 21:07:20.368332] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97a1d5b0-5381-44ba-bfc8-56f53a684fe6 00:20:59.385 [2024-12-08 21:07:20.368342] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117248 00:20:59.385 [2024-12-08 21:07:20.368352] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118208 00:20:59.385 [2024-12-08 21:07:20.368362] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117248 00:20:59.385 [2024-12-08 21:07:20.368377] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:20:59.385 [2024-12-08 21:07:20.368387] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:59.385 [2024-12-08 21:07:20.368398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:59.385 [2024-12-08 21:07:20.368408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:59.385 [2024-12-08 21:07:20.368427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:59.386 [2024-12-08 21:07:20.368452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:59.386 [2024-12-08 21:07:20.368462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.386 [2024-12-08 21:07:20.368472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:59.386 [2024-12-08 21:07:20.368482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:20:59.386 [2024-12-08 21:07:20.368492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.386 [2024-12-08 21:07:20.382436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.386 [2024-12-08 21:07:20.382471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:59.386 [2024-12-08 21:07:20.382485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.908 ms 00:20:59.386 [2024-12-08 21:07:20.382495] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.386 [2024-12-08 21:07:20.382689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.386 [2024-12-08 21:07:20.382704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:59.386 [2024-12-08 21:07:20.382714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:20:59.386 [2024-12-08 21:07:20.382723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.386 [2024-12-08 21:07:20.419430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.386 [2024-12-08 21:07:20.419466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:59.386 [2024-12-08 21:07:20.419480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.386 [2024-12-08 21:07:20.419489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.386 [2024-12-08 21:07:20.419537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.386 [2024-12-08 21:07:20.419550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:59.386 [2024-12-08 21:07:20.419560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.386 [2024-12-08 21:07:20.419569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.386 [2024-12-08 21:07:20.419640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.386 [2024-12-08 21:07:20.419662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:59.386 [2024-12-08 21:07:20.419672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.386 [2024-12-08 21:07:20.419680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.386 [2024-12-08 21:07:20.419699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.386 [2024-12-08 21:07:20.419709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:59.386 [2024-12-08 21:07:20.419718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.386 [2024-12-08 21:07:20.419726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.645 [2024-12-08 21:07:20.499009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.645 [2024-12-08 21:07:20.499240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:59.645 [2024-12-08 21:07:20.499268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.645 [2024-12-08 21:07:20.499281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.645 [2024-12-08 21:07:20.530678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.645 [2024-12-08 21:07:20.530710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.645 [2024-12-08 21:07:20.530724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.645 [2024-12-08 21:07:20.530734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.645 [2024-12-08 21:07:20.530798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.645 [2024-12-08 21:07:20.530814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.645 [2024-12-08 21:07:20.530830] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.645 [2024-12-08 21:07:20.530839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.645 [2024-12-08 21:07:20.530883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.645 [2024-12-08 21:07:20.530897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.645 [2024-12-08 21:07:20.530907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.645 [2024-12-08 21:07:20.530915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.645 [2024-12-08 21:07:20.531020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.645 [2024-12-08 21:07:20.531037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.645 [2024-12-08 21:07:20.531047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.645 [2024-12-08 21:07:20.531062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.646 [2024-12-08 21:07:20.531166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.646 [2024-12-08 21:07:20.531185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:59.646 [2024-12-08 21:07:20.531196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.646 [2024-12-08 21:07:20.531221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.646 [2024-12-08 21:07:20.531276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.646 [2024-12-08 21:07:20.531290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.646 [2024-12-08 21:07:20.531301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.646 [2024-12-08 21:07:20.531317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.646 [2024-12-08 21:07:20.531365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.646 [2024-12-08 21:07:20.531380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.646 [2024-12-08 21:07:20.531391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.646 [2024-12-08 21:07:20.531401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.646 [2024-12-08 21:07:20.531593] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 458.175 ms, result 0 00:21:01.025 00:21:01.025 00:21:01.025 21:07:22 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:01.285 [2024-12-08 21:07:22.094111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:01.285 [2024-12-08 21:07:22.094278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75081 ] 00:21:01.285 [2024-12-08 21:07:22.262570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.544 [2024-12-08 21:07:22.408448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.861 [2024-12-08 21:07:22.656053] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.861 [2024-12-08 21:07:22.656152] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.861 [2024-12-08 21:07:22.806027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 21:07:22.806121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:01.861 [2024-12-08 21:07:22.806142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:01.861 [2024-12-08 21:07:22.806158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.806232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 21:07:22.806250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.861 [2024-12-08 21:07:22.806262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:01.861 [2024-12-08 21:07:22.806272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.806300] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:01.861 [2024-12-08 21:07:22.807183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:01.861 [2024-12-08 21:07:22.807212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 21:07:22.807224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.861 [2024-12-08 21:07:22.807235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:21:01.861 [2024-12-08 21:07:22.807245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.808383] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:01.861 [2024-12-08 21:07:22.822216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 21:07:22.822256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:01.861 [2024-12-08 21:07:22.822272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.835 ms 00:21:01.861 [2024-12-08 21:07:22.822283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.822344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 21:07:22.822369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:01.861 [2024-12-08 21:07:22.822381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:01.861 [2024-12-08 21:07:22.822391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.826848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 
21:07:22.826885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.861 [2024-12-08 21:07:22.826899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.358 ms 00:21:01.861 [2024-12-08 21:07:22.826909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.826999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 21:07:22.827017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.861 [2024-12-08 21:07:22.827028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:01.861 [2024-12-08 21:07:22.827037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.827137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 21:07:22.827155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:01.861 [2024-12-08 21:07:22.827166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:01.861 [2024-12-08 21:07:22.827176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.827217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:01.861 [2024-12-08 21:07:22.831025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 21:07:22.831061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.861 [2024-12-08 21:07:22.831118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:21:01.861 [2024-12-08 21:07:22.831131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.831170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.861 [2024-12-08 21:07:22.831184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:01.861 [2024-12-08 21:07:22.831206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:01.861 [2024-12-08 21:07:22.831222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.861 [2024-12-08 21:07:22.831262] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:01.861 [2024-12-08 21:07:22.831289] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:01.861 [2024-12-08 21:07:22.831335] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:01.861 [2024-12-08 21:07:22.831354] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:01.861 [2024-12-08 21:07:22.831458] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:01.861 [2024-12-08 21:07:22.831487] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:01.862 [2024-12-08 21:07:22.831505] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:01.862 [2024-12-08 21:07:22.831518] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:01.862 [2024-12-08 21:07:22.831531] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:01.862 [2024-12-08 21:07:22.831542] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:01.862 [2024-12-08 21:07:22.831560] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:01.862 [2024-12-08 21:07:22.831570] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:01.862 [2024-12-08 21:07:22.831580] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:01.862 [2024-12-08 21:07:22.831591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.862 [2024-12-08 21:07:22.831601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:01.862 [2024-12-08 21:07:22.831612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:21:01.862 [2024-12-08 21:07:22.831622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.862 [2024-12-08 21:07:22.831696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.862 [2024-12-08 21:07:22.831710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:01.862 [2024-12-08 21:07:22.831721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:01.862 [2024-12-08 21:07:22.831731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.862 [2024-12-08 21:07:22.831803] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:01.862 [2024-12-08 21:07:22.831817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:01.862 [2024-12-08 21:07:22.831828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.862 [2024-12-08 21:07:22.831839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.862 [2024-12-08 21:07:22.831850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:01.862 [2024-12-08 21:07:22.831859] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:01.862 [2024-12-08 21:07:22.831869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:01.862 [2024-12-08 21:07:22.831878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:01.862 [2024-12-08 21:07:22.831888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:01.862 [2024-12-08 21:07:22.831899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.862 [2024-12-08 21:07:22.831908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:01.862 [2024-12-08 21:07:22.831917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:01.862 [2024-12-08 21:07:22.831927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.862 [2024-12-08 21:07:22.831936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:01.862 [2024-12-08 21:07:22.831945] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:01.862 [2024-12-08 21:07:22.831954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.862 [2024-12-08 21:07:22.831975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:01.862 [2024-12-08 21:07:22.831985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:01.862 [2024-12-08 21:07:22.831994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:01.862 [2024-12-08 21:07:22.832004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:01.862 [2024-12-08 21:07:22.832013] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:01.862 [2024-12-08 21:07:22.832022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:01.862 [2024-12-08 21:07:22.832032] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:01.862 [2024-12-08 21:07:22.832041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:01.862 [2024-12-08 21:07:22.832050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:01.862 [2024-12-08 21:07:22.832059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:01.862 [2024-12-08 21:07:22.832069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:01.862 [2024-12-08 21:07:22.832078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:01.862 [2024-12-08 21:07:22.832087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:01.862 [2024-12-08 21:07:22.832096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:01.862 [2024-12-08 21:07:22.832156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:01.862 [2024-12-08 21:07:22.832168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:01.862 [2024-12-08 21:07:22.832178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:01.862 [2024-12-08 21:07:22.832187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:01.862 [2024-12-08 21:07:22.832198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:01.862 [2024-12-08 21:07:22.832208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:01.862 [2024-12-08 21:07:22.832217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.862 [2024-12-08 21:07:22.832227] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:01.862 [2024-12-08 21:07:22.832237] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:01.862 [2024-12-08 21:07:22.832246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.862 [2024-12-08 21:07:22.832256] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:01.862 [2024-12-08 21:07:22.832273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:01.862 [2024-12-08 21:07:22.832283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.862 [2024-12-08 21:07:22.832294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.862 [2024-12-08 21:07:22.832304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:01.862 [2024-12-08 21:07:22.832314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:01.862 [2024-12-08 21:07:22.832323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:01.862 [2024-12-08 21:07:22.832333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:01.862 [2024-12-08 21:07:22.832346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:01.862 [2024-12-08 21:07:22.832356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:01.862 [2024-12-08 21:07:22.832366] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:01.862 [2024-12-08 21:07:22.832379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.862 [2024-12-08 21:07:22.832391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:01.862 [2024-12-08 21:07:22.832401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:01.862 [2024-12-08 21:07:22.832412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:01.862 [2024-12-08 21:07:22.832422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:01.862 [2024-12-08 21:07:22.832432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:01.862 [2024-12-08 21:07:22.832442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:01.862 [2024-12-08 21:07:22.832453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:01.862 [2024-12-08 21:07:22.832463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:01.862 [2024-12-08 21:07:22.832487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:01.862 [2024-12-08 21:07:22.832497] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:01.862 [2024-12-08 21:07:22.832507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:01.862 [2024-12-08 21:07:22.832518] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:01.862 [2024-12-08 21:07:22.832528] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:01.862 [2024-12-08 21:07:22.832538] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:01.862 [2024-12-08 21:07:22.832550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.862 [2024-12-08 21:07:22.832561] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:01.862 [2024-12-08 21:07:22.832571] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:01.862 [2024-12-08 21:07:22.832582] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:01.862 [2024-12-08 21:07:22.832592] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:01.862 [2024-12-08 21:07:22.832603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.862 [2024-12-08 21:07:22.832615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:01.862 [2024-12-08 21:07:22.832625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:21:01.862 [2024-12-08 21:07:22.832635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.862 [2024-12-08 21:07:22.850495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.862 [2024-12-08 21:07:22.850674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.862 [2024-12-08 21:07:22.850719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.784 ms 00:21:01.862 [2024-12-08 21:07:22.850747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.862 [2024-12-08 21:07:22.850856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.862 [2024-12-08 21:07:22.850873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:01.862 [2024-12-08 21:07:22.850886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:01.862 [2024-12-08 21:07:22.850897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.900886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.901114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.157 [2024-12-08 21:07:22.901143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.912 ms 00:21:02.157 [2024-12-08 21:07:22.901155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.901217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.901234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.157 [2024-12-08 21:07:22.901246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:02.157 [2024-12-08 21:07:22.901256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.901658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.901676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.157 [2024-12-08 21:07:22.901688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:21:02.157 [2024-12-08 21:07:22.901703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.901830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.901846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.157 [2024-12-08 21:07:22.901856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:02.157 [2024-12-08 21:07:22.901866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.918623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.918686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.157 [2024-12-08 21:07:22.918711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.733 ms 00:21:02.157 [2024-12-08 
21:07:22.918723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.932878] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:02.157 [2024-12-08 21:07:22.933064] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:02.157 [2024-12-08 21:07:22.933119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.933131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:02.157 [2024-12-08 21:07:22.933145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.246 ms 00:21:02.157 [2024-12-08 21:07:22.933156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.958163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.958200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:02.157 [2024-12-08 21:07:22.958232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.964 ms 00:21:02.157 [2024-12-08 21:07:22.958242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.971543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.971580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:02.157 [2024-12-08 21:07:22.971594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.259 ms 00:21:02.157 [2024-12-08 21:07:22.971604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.984031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.984066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:02.157 [2024-12-08 21:07:22.984146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.389 ms 00:21:02.157 [2024-12-08 21:07:22.984157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:22.984657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:22.984682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:02.157 [2024-12-08 21:07:22.984694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:21:02.157 [2024-12-08 21:07:22.984705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:23.043891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-12-08 21:07:23.043949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:02.157 [2024-12-08 21:07:23.043967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.164 ms 00:21:02.157 [2024-12-08 21:07:23.043978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-12-08 21:07:23.054066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:02.157 [2024-12-08 21:07:23.055988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-12-08 21:07:23.056020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.158 [2024-12-08 21:07:23.056034] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.961 ms 00:21:02.158 [2024-12-08 21:07:23.056049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-12-08 21:07:23.056200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-12-08 21:07:23.056253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:02.158 [2024-12-08 21:07:23.056267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:02.158 [2024-12-08 21:07:23.056278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-12-08 21:07:23.057333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-12-08 21:07:23.057358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.158 [2024-12-08 21:07:23.057370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:21:02.158 [2024-12-08 21:07:23.057379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-12-08 21:07:23.058938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-12-08 21:07:23.058971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:02.158 [2024-12-08 21:07:23.058984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:21:02.158 [2024-12-08 21:07:23.058993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-12-08 21:07:23.059024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-12-08 21:07:23.059036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:02.158 [2024-12-08 21:07:23.059052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:02.158 [2024-12-08 21:07:23.059062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-12-08 21:07:23.059147] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:02.158 [2024-12-08 21:07:23.059165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-12-08 21:07:23.059180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:02.158 [2024-12-08 21:07:23.059191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:02.158 [2024-12-08 21:07:23.059201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-12-08 21:07:23.084124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-12-08 21:07:23.084179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:02.158 [2024-12-08 21:07:23.084211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.865 ms 00:21:02.158 [2024-12-08 21:07:23.084223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-12-08 21:07:23.084302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-12-08 21:07:23.084319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:02.158 [2024-12-08 21:07:23.084331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:02.158 [2024-12-08 21:07:23.084341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-12-08 21:07:23.091036] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 283.338 ms, result 0 00:21:03.539  [2024-12-08T21:07:25.552Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-08T21:07:26.486Z] Copying: 44/1024 [MB] (22 MBps) [2024-12-08T21:07:27.423Z] Copying: 66/1024 [MB] (22 MBps) [2024-12-08T21:07:28.361Z] Copying: 89/1024 [MB] (22 MBps) [2024-12-08T21:07:29.301Z] Copying: 111/1024 [MB] (22 MBps) [2024-12-08T21:07:30.682Z] Copying: 133/1024 [MB] (22 MBps) [2024-12-08T21:07:31.620Z] Copying: 155/1024 [MB] (21 MBps) [2024-12-08T21:07:32.558Z] Copying: 178/1024 [MB] (22 MBps) [2024-12-08T21:07:33.496Z] Copying: 201/1024 [MB] (22 MBps) [2024-12-08T21:07:34.434Z] Copying: 223/1024 [MB] (22 MBps) [2024-12-08T21:07:35.370Z] Copying: 246/1024 [MB] (22 MBps) [2024-12-08T21:07:36.305Z] Copying: 268/1024 [MB] (22 MBps) [2024-12-08T21:07:37.678Z] Copying: 290/1024 [MB] (21 MBps) [2024-12-08T21:07:38.610Z] Copying: 312/1024 [MB] (22 MBps) [2024-12-08T21:07:39.554Z] Copying: 335/1024 [MB] (22 MBps) [2024-12-08T21:07:40.486Z] Copying: 358/1024 [MB] (22 MBps) [2024-12-08T21:07:41.422Z] Copying: 381/1024 [MB] (23 MBps) [2024-12-08T21:07:42.359Z] Copying: 403/1024 [MB] (21 MBps) [2024-12-08T21:07:43.296Z] Copying: 426/1024 [MB] (22 MBps) [2024-12-08T21:07:44.674Z] Copying: 448/1024 [MB] (22 MBps) [2024-12-08T21:07:45.611Z] Copying: 470/1024 [MB] (21 MBps) [2024-12-08T21:07:46.546Z] Copying: 492/1024 [MB] (22 MBps) [2024-12-08T21:07:47.485Z] Copying: 515/1024 [MB] (23 MBps) [2024-12-08T21:07:48.422Z] Copying: 538/1024 [MB] (22 MBps) [2024-12-08T21:07:49.360Z] Copying: 561/1024 [MB] (22 MBps) [2024-12-08T21:07:50.297Z] Copying: 584/1024 [MB] (22 MBps) [2024-12-08T21:07:51.688Z] Copying: 606/1024 [MB] (22 MBps) [2024-12-08T21:07:52.625Z] Copying: 628/1024 [MB] (21 MBps) [2024-12-08T21:07:53.562Z] Copying: 651/1024 [MB] (22 MBps) [2024-12-08T21:07:54.524Z] Copying: 673/1024 [MB] (22 MBps) [2024-12-08T21:07:55.473Z] Copying: 695/1024 [MB] (22 MBps) [2024-12-08T21:07:56.408Z] Copying: 717/1024 [MB] (22 MBps) [2024-12-08T21:07:57.366Z] Copying: 740/1024 [MB] (22 MBps) [2024-12-08T21:07:58.301Z] Copying: 763/1024 [MB] (23 MBps) [2024-12-08T21:07:59.679Z] Copying: 786/1024 [MB] (22 MBps) [2024-12-08T21:08:00.616Z] Copying: 809/1024 [MB] (22 MBps) [2024-12-08T21:08:01.551Z] Copying: 831/1024 [MB] (22 MBps) [2024-12-08T21:08:02.488Z] Copying: 854/1024 [MB] (22 MBps) [2024-12-08T21:08:03.424Z] Copying: 876/1024 [MB] (22 MBps) [2024-12-08T21:08:04.362Z] Copying: 898/1024 [MB] (22 MBps) [2024-12-08T21:08:05.307Z] Copying: 921/1024 [MB] (22 MBps) [2024-12-08T21:08:06.686Z] Copying: 943/1024 [MB] (22 MBps) [2024-12-08T21:08:07.621Z] Copying: 966/1024 [MB] (22 MBps) [2024-12-08T21:08:08.557Z] Copying: 989/1024 [MB] (23 MBps) [2024-12-08T21:08:08.817Z] Copying: 1012/1024 [MB] (22 MBps) [2024-12-08T21:08:09.076Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-08 21:08:08.907501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.033 [2024-12-08 21:08:08.907600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:48.033 [2024-12-08 21:08:08.907653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:48.033 [2024-12-08 21:08:08.907665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.033 [2024-12-08 21:08:08.907697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:48.033 [2024-12-08 21:08:08.913860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:48.033 [2024-12-08 21:08:08.913907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:48.033 [2024-12-08 21:08:08.913927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.124 ms 00:21:48.033 [2024-12-08 21:08:08.913943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.033 [2024-12-08 21:08:08.914320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.033 [2024-12-08 21:08:08.914355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:48.033 [2024-12-08 21:08:08.914381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:21:48.033 [2024-12-08 21:08:08.914397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.033 [2024-12-08 21:08:08.919864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.033 [2024-12-08 21:08:08.920093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:48.033 [2024-12-08 21:08:08.920146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.440 ms 00:21:48.033 [2024-12-08 21:08:08.920164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.033 [2024-12-08 21:08:08.925885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.033 [2024-12-08 21:08:08.925911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:48.033 [2024-12-08 21:08:08.925924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.661 ms 00:21:48.033 [2024-12-08 21:08:08.925941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.033 [2024-12-08 21:08:08.951461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.033 [2024-12-08 21:08:08.951507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:48.033 [2024-12-08 21:08:08.951520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.470 ms 00:21:48.033 [2024-12-08 21:08:08.951530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.033 [2024-12-08 21:08:08.966391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.033 [2024-12-08 21:08:08.966578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:48.033 [2024-12-08 21:08:08.966605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.821 ms 00:21:48.033 [2024-12-08 21:08:08.966617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.293 [2024-12-08 21:08:09.088841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.293 [2024-12-08 21:08:09.088877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:48.293 [2024-12-08 21:08:09.088893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 122.195 ms 00:21:48.293 [2024-12-08 21:08:09.088905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.293 [2024-12-08 21:08:09.114579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.293 [2024-12-08 21:08:09.114609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:48.293 [2024-12-08 21:08:09.114622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.646 ms 00:21:48.293 [2024-12-08 21:08:09.114632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.293 [2024-12-08 21:08:09.139655] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.293 [2024-12-08 21:08:09.139683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:48.293 [2024-12-08 21:08:09.139696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.986 ms 00:21:48.293 [2024-12-08 21:08:09.139720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.294 [2024-12-08 21:08:09.164107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.294 [2024-12-08 21:08:09.164152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:48.294 [2024-12-08 21:08:09.164166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.350 ms 00:21:48.294 [2024-12-08 21:08:09.164176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.294 [2024-12-08 21:08:09.188756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.294 [2024-12-08 21:08:09.188937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:48.294 [2024-12-08 21:08:09.188963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.502 ms 00:21:48.294 [2024-12-08 21:08:09.188974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.294 [2024-12-08 21:08:09.189015] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:48.294 [2024-12-08 21:08:09.189036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:21:48.294 [2024-12-08 21:08:09.189049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 
wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189788] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:48.294 [2024-12-08 21:08:09.189911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.189920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.189930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.189939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.189949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.189958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.189968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.189977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.189988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.189997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190026] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:48.295 [2024-12-08 21:08:09.190155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:48.295 [2024-12-08 21:08:09.190175] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97a1d5b0-5381-44ba-bfc8-56f53a684fe6 00:21:48.295 [2024-12-08 21:08:09.190185] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:21:48.295 [2024-12-08 21:08:09.190195] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 17344 00:21:48.295 [2024-12-08 21:08:09.190204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 16384 00:21:48.295 [2024-12-08 21:08:09.190221] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0586 00:21:48.295 [2024-12-08 21:08:09.190231] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:48.295 [2024-12-08 21:08:09.190240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:48.295 [2024-12-08 21:08:09.190249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:48.295 [2024-12-08 21:08:09.190258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:48.295 [2024-12-08 21:08:09.190275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:48.295 [2024-12-08 21:08:09.190286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.295 [2024-12-08 21:08:09.190296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:48.295 [2024-12-08 21:08:09.190306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:21:48.295 [2024-12-08 21:08:09.190315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.295 [2024-12-08 21:08:09.203354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.295 [2024-12-08 21:08:09.203526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:48.295 [2024-12-08 21:08:09.203551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 13.004 ms 00:21:48.295 [2024-12-08 21:08:09.203562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.295 [2024-12-08 21:08:09.203782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.295 [2024-12-08 21:08:09.203800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:48.295 [2024-12-08 21:08:09.203811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:21:48.295 [2024-12-08 21:08:09.203821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.295 [2024-12-08 21:08:09.240229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.295 [2024-12-08 21:08:09.240261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.295 [2024-12-08 21:08:09.240274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.295 [2024-12-08 21:08:09.240285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.295 [2024-12-08 21:08:09.240335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.295 [2024-12-08 21:08:09.240349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.295 [2024-12-08 21:08:09.240359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.295 [2024-12-08 21:08:09.240369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.295 [2024-12-08 21:08:09.240463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.295 [2024-12-08 21:08:09.240485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.295 [2024-12-08 21:08:09.240496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.295 [2024-12-08 21:08:09.240506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.295 [2024-12-08 21:08:09.240539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.295 [2024-12-08 21:08:09.240550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.295 [2024-12-08 21:08:09.240560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.295 [2024-12-08 21:08:09.240569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.295 [2024-12-08 21:08:09.317436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.295 [2024-12-08 21:08:09.317680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.295 [2024-12-08 21:08:09.317782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.295 [2024-12-08 21:08:09.317827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-12-08 21:08:09.350046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.554 [2024-12-08 21:08:09.350260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.555 [2024-12-08 21:08:09.350392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.555 [2024-12-08 21:08:09.350444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-12-08 21:08:09.350702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.555 [2024-12-08 21:08:09.350751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
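The statistics block dumped just above is the shutdown-time health summary for ftl0: 16384 user (host) writes against 17344 total media writes, reported as WAF 1.0586. That is consistent with the write-amplification factor being total writes divided by user writes (17344 / 16384 ≈ 1.0586). A small sketch for recomputing it from a captured log, assuming the dump format shown here (the helper name and log path are illustrative only; requires GNU grep for -P):

    # Hypothetical helper: pull the two counters out of a saved
    # ftl_dev_dump_stats section and recompute WAF.
    compute_waf() {
        local log=$1 total user
        total=$(grep -oP 'total writes:\s*\K[0-9]+' "$log" | tail -n1)
        user=$(grep -oP 'user writes:\s*\K[0-9]+' "$log" | tail -n1)
        awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.4f\n", t / u }'
    }
    compute_waf ftl_shutdown.log    # prints "WAF: 1.0586" for the values above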
00:21:48.555 [2024-12-08 21:08:09.350795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.555 [2024-12-08 21:08:09.350810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-12-08 21:08:09.350867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.555 [2024-12-08 21:08:09.350882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.555 [2024-12-08 21:08:09.350893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.555 [2024-12-08 21:08:09.350903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-12-08 21:08:09.351016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.555 [2024-12-08 21:08:09.351035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.555 [2024-12-08 21:08:09.351046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.555 [2024-12-08 21:08:09.351062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-12-08 21:08:09.351122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.555 [2024-12-08 21:08:09.351140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:48.555 [2024-12-08 21:08:09.351152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.555 [2024-12-08 21:08:09.351177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-12-08 21:08:09.351216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.555 [2024-12-08 21:08:09.351229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.555 [2024-12-08 21:08:09.351239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.555 [2024-12-08 21:08:09.351254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-12-08 21:08:09.351301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.555 [2024-12-08 21:08:09.351315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.555 [2024-12-08 21:08:09.351331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.555 [2024-12-08 21:08:09.351341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-12-08 21:08:09.351508] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 443.968 ms, result 0 00:21:49.489 00:21:49.489 00:21:49.489 21:08:10 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:51.385 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:51.385 21:08:12 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:51.385 21:08:12 -- ftl/restore.sh@85 -- # restore_kill 00:21:51.385 21:08:12 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:51.385 21:08:12 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:51.385 21:08:12 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:51.385 21:08:12 -- ftl/restore.sh@32 -- # killprocess 73400 00:21:51.385 21:08:12 -- common/autotest_common.sh@936 -- # '[' -z 73400 ']' 00:21:51.385 21:08:12 -- common/autotest_common.sh@940 -- # kill -0 73400 00:21:51.385 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (73400) - No such process 00:21:51.385 21:08:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 73400 is not found' 00:21:51.385 Process with pid 73400 is not found 00:21:51.385 Remove shared memory files 00:21:51.385 21:08:12 -- ftl/restore.sh@33 -- # remove_shm 00:21:51.385 21:08:12 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:51.386 21:08:12 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:51.386 21:08:12 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:51.386 21:08:12 -- ftl/common.sh@207 -- # rm -f rm -f 00:21:51.386 21:08:12 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:51.386 21:08:12 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:51.386 ************************************ 00:21:51.386 END TEST ftl_restore 00:21:51.386 ************************************ 00:21:51.386 00:21:51.386 real 3m31.047s 00:21:51.386 user 3m17.344s 00:21:51.386 sys 0m14.463s 00:21:51.386 21:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:51.386 21:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:51.386 21:08:12 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:51.386 21:08:12 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:51.386 21:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.386 21:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:51.386 ************************************ 00:21:51.386 START TEST ftl_dirty_shutdown 00:21:51.386 ************************************ 00:21:51.386 21:08:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:51.386 * Looking for test storage... 00:21:51.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.386 21:08:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:51.386 21:08:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:51.386 21:08:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:51.643 21:08:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:51.643 21:08:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:51.643 21:08:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:51.643 21:08:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:51.643 21:08:12 -- scripts/common.sh@335 -- # IFS=.-: 00:21:51.643 21:08:12 -- scripts/common.sh@335 -- # read -ra ver1 00:21:51.643 21:08:12 -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.643 21:08:12 -- scripts/common.sh@336 -- # read -ra ver2 00:21:51.643 21:08:12 -- scripts/common.sh@337 -- # local 'op=<' 00:21:51.643 21:08:12 -- scripts/common.sh@339 -- # ver1_l=2 00:21:51.643 21:08:12 -- scripts/common.sh@340 -- # ver2_l=1 00:21:51.643 21:08:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:51.643 21:08:12 -- scripts/common.sh@343 -- # case "$op" in 00:21:51.643 21:08:12 -- scripts/common.sh@344 -- # : 1 00:21:51.643 21:08:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:51.643 21:08:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
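Note how restore_kill above survives the already-dead target: killprocess probes pid 73400 with kill -0, which sends no signal and only checks that the process exists, and the "No such process" outcome is downgraded to an informational echo so the test still finishes cleanly. A minimal sketch of that pattern (not the exact autotest_common.sh code; the wait loop is an assumption):

    killprocess_sketch() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then
            # Process already gone: report and succeed, as in the log above.
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid"
        # Poll until the pid disappears (the real helper's loop may differ).
        while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
    }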
ver1_l : ver2_l) )) 00:21:51.643 21:08:12 -- scripts/common.sh@364 -- # decimal 1 00:21:51.643 21:08:12 -- scripts/common.sh@352 -- # local d=1 00:21:51.643 21:08:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.643 21:08:12 -- scripts/common.sh@354 -- # echo 1 00:21:51.643 21:08:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:51.643 21:08:12 -- scripts/common.sh@365 -- # decimal 2 00:21:51.643 21:08:12 -- scripts/common.sh@352 -- # local d=2 00:21:51.643 21:08:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.643 21:08:12 -- scripts/common.sh@354 -- # echo 2 00:21:51.643 21:08:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:51.643 21:08:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:51.643 21:08:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:51.643 21:08:12 -- scripts/common.sh@367 -- # return 0 00:21:51.643 21:08:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.643 21:08:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.643 --rc genhtml_branch_coverage=1 00:21:51.643 --rc genhtml_function_coverage=1 00:21:51.643 --rc genhtml_legend=1 00:21:51.643 --rc geninfo_all_blocks=1 00:21:51.643 --rc geninfo_unexecuted_blocks=1 00:21:51.643 00:21:51.643 ' 00:21:51.643 21:08:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.643 --rc genhtml_branch_coverage=1 00:21:51.643 --rc genhtml_function_coverage=1 00:21:51.643 --rc genhtml_legend=1 00:21:51.643 --rc geninfo_all_blocks=1 00:21:51.643 --rc geninfo_unexecuted_blocks=1 00:21:51.643 00:21:51.643 ' 00:21:51.643 21:08:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.643 --rc genhtml_branch_coverage=1 00:21:51.643 --rc genhtml_function_coverage=1 00:21:51.643 --rc genhtml_legend=1 00:21:51.643 --rc geninfo_all_blocks=1 00:21:51.643 --rc geninfo_unexecuted_blocks=1 00:21:51.643 00:21:51.643 ' 00:21:51.643 21:08:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.643 --rc genhtml_branch_coverage=1 00:21:51.643 --rc genhtml_function_coverage=1 00:21:51.643 --rc genhtml_legend=1 00:21:51.643 --rc geninfo_all_blocks=1 00:21:51.643 --rc geninfo_unexecuted_blocks=1 00:21:51.643 00:21:51.643 ' 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:51.643 21:08:12 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:51.643 21:08:12 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.643 21:08:12 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.643 21:08:12 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
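The lt 1.15 2 probe traced above decides whether the installed lcov is older than 2.x, which selects the legacy --rc lcov_branch_coverage=1 option spelling seen in the LCOV_OPTS exports that follow. As the xtrace shows, cmp_versions splits both version strings on ".", "-" and ":" and walks the components numerically. A condensed reconstruction of that logic, based only on the trace here, so treat it as an approximation (the real decimal helper also sanitizes non-numeric components, which this sketch skips):

    lt_sketch() {   # returns 0 (true) if version $1 < version $2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }
    lt_sketch 1.15 2 && echo "lcov < 2: use legacy --rc option names"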
00:21:51.643 21:08:12 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:51.643 21:08:12 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.643 21:08:12 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:51.643 21:08:12 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:51.643 21:08:12 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.643 21:08:12 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.643 21:08:12 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:51.643 21:08:12 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:51.643 21:08:12 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:51.643 21:08:12 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:51.643 21:08:12 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:51.643 21:08:12 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:51.643 21:08:12 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.643 21:08:12 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.643 21:08:12 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:51.643 21:08:12 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:51.643 21:08:12 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:51.643 21:08:12 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:51.643 21:08:12 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:51.643 21:08:12 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:51.643 21:08:12 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:51.643 21:08:12 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:51.643 21:08:12 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:51.643 21:08:12 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75651 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75651 00:21:51.643 21:08:12 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:51.643 21:08:12 -- common/autotest_common.sh@829 -- # '[' -z 75651 ']' 
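The getopts :u:c: loop traced above is dirty_shutdown.sh collecting its arguments: -c 0000:00:06.0 lands in nv_cache, the parsed options are shifted off (the traced "shift 2" is what shift $((OPTIND - 1)) expands to for one option with a value), and the first remaining positional becomes the base device. A sketch matching the variables in the trace (the meaning of -u is not exercised in this run and is assumed):

    while getopts ':u:c:' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;      # 0000:00:06.0 in this run
            u) uuid=$OPTARG ;;          # assumed: reuse an existing FTL UUID
        esac
    done
    shift $((OPTIND - 1))
    device=$1                           # 0000:00:07.0
    timeout=240; block_size=4096; chunk_size=262144; data_size=262144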
00:21:51.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.643 21:08:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.643 21:08:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.643 21:08:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.643 21:08:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.643 21:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:51.643 [2024-12-08 21:08:12.594379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:51.643 [2024-12-08 21:08:12.594527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75651 ] 00:21:51.900 [2024-12-08 21:08:12.747197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.901 [2024-12-08 21:08:12.898088] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:51.901 [2024-12-08 21:08:12.898303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.831 21:08:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.831 21:08:13 -- common/autotest_common.sh@862 -- # return 0 00:21:52.831 21:08:13 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:21:52.831 21:08:13 -- ftl/common.sh@54 -- # local name=nvme0 00:21:52.831 21:08:13 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:21:52.831 21:08:13 -- ftl/common.sh@56 -- # local size=103424 00:21:52.831 21:08:13 -- ftl/common.sh@59 -- # local base_bdev 00:21:52.831 21:08:13 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:21:52.831 21:08:13 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:52.831 21:08:13 -- ftl/common.sh@62 -- # local base_size 00:21:52.831 21:08:13 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:52.831 21:08:13 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:21:52.831 21:08:13 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:52.831 21:08:13 -- common/autotest_common.sh@1369 -- # local bs 00:21:52.831 21:08:13 -- common/autotest_common.sh@1370 -- # local nb 00:21:52.831 21:08:13 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:53.089 21:08:14 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:53.089 { 00:21:53.089 "name": "nvme0n1", 00:21:53.089 "aliases": [ 00:21:53.089 "b29f950d-4fa2-4b7e-8844-55a5dbfcf9d5" 00:21:53.089 ], 00:21:53.089 "product_name": "NVMe disk", 00:21:53.089 "block_size": 4096, 00:21:53.089 "num_blocks": 1310720, 00:21:53.089 "uuid": "b29f950d-4fa2-4b7e-8844-55a5dbfcf9d5", 00:21:53.089 "assigned_rate_limits": { 00:21:53.089 "rw_ios_per_sec": 0, 00:21:53.089 "rw_mbytes_per_sec": 0, 00:21:53.089 "r_mbytes_per_sec": 0, 00:21:53.089 "w_mbytes_per_sec": 0 00:21:53.089 }, 00:21:53.089 "claimed": true, 00:21:53.089 "claim_type": "read_many_write_one", 00:21:53.089 "zoned": false, 00:21:53.089 "supported_io_types": { 00:21:53.089 "read": true, 00:21:53.089 "write": true, 00:21:53.089 "unmap": true, 00:21:53.089 "write_zeroes": true, 00:21:53.089 "flush": true, 00:21:53.089 "reset": true, 00:21:53.089 "compare": true, 
00:21:53.089 "compare_and_write": false, 00:21:53.089 "abort": true, 00:21:53.089 "nvme_admin": true, 00:21:53.089 "nvme_io": true 00:21:53.089 }, 00:21:53.089 "driver_specific": { 00:21:53.089 "nvme": [ 00:21:53.089 { 00:21:53.089 "pci_address": "0000:00:07.0", 00:21:53.089 "trid": { 00:21:53.089 "trtype": "PCIe", 00:21:53.089 "traddr": "0000:00:07.0" 00:21:53.089 }, 00:21:53.089 "ctrlr_data": { 00:21:53.089 "cntlid": 0, 00:21:53.089 "vendor_id": "0x1b36", 00:21:53.089 "model_number": "QEMU NVMe Ctrl", 00:21:53.089 "serial_number": "12341", 00:21:53.089 "firmware_revision": "8.0.0", 00:21:53.089 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:53.089 "oacs": { 00:21:53.089 "security": 0, 00:21:53.089 "format": 1, 00:21:53.089 "firmware": 0, 00:21:53.089 "ns_manage": 1 00:21:53.089 }, 00:21:53.089 "multi_ctrlr": false, 00:21:53.089 "ana_reporting": false 00:21:53.089 }, 00:21:53.089 "vs": { 00:21:53.089 "nvme_version": "1.4" 00:21:53.089 }, 00:21:53.089 "ns_data": { 00:21:53.089 "id": 1, 00:21:53.089 "can_share": false 00:21:53.089 } 00:21:53.089 } 00:21:53.089 ], 00:21:53.089 "mp_policy": "active_passive" 00:21:53.089 } 00:21:53.089 } 00:21:53.089 ]' 00:21:53.089 21:08:14 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:53.089 21:08:14 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:53.089 21:08:14 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:53.347 21:08:14 -- common/autotest_common.sh@1373 -- # nb=1310720 00:21:53.347 21:08:14 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:21:53.347 21:08:14 -- common/autotest_common.sh@1377 -- # echo 5120 00:21:53.347 21:08:14 -- ftl/common.sh@63 -- # base_size=5120 00:21:53.347 21:08:14 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:53.347 21:08:14 -- ftl/common.sh@67 -- # clear_lvols 00:21:53.347 21:08:14 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:53.347 21:08:14 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:53.347 21:08:14 -- ftl/common.sh@28 -- # stores=27b03bc3-ca96-47d2-a8bc-54dbb347c3a8 00:21:53.347 21:08:14 -- ftl/common.sh@29 -- # for lvs in $stores 00:21:53.347 21:08:14 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27b03bc3-ca96-47d2-a8bc-54dbb347c3a8 00:21:53.605 21:08:14 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:53.864 21:08:14 -- ftl/common.sh@68 -- # lvs=f626c195-8bfd-4aa9-a4a6-5a600f9a637d 00:21:53.864 21:08:14 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f626c195-8bfd-4aa9-a4a6-5a600f9a637d 00:21:54.124 21:08:15 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:54.124 21:08:15 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:21:54.124 21:08:15 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:54.124 21:08:15 -- ftl/common.sh@35 -- # local name=nvc0 00:21:54.124 21:08:15 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:21:54.124 21:08:15 -- ftl/common.sh@37 -- # local base_bdev=a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:54.124 21:08:15 -- ftl/common.sh@38 -- # local cache_size= 00:21:54.124 21:08:15 -- ftl/common.sh@41 -- # get_bdev_size a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:54.124 21:08:15 -- common/autotest_common.sh@1367 -- # local bdev_name=a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:54.124 21:08:15 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:21:54.124 21:08:15 -- common/autotest_common.sh@1369 -- # local bs 00:21:54.124 21:08:15 -- common/autotest_common.sh@1370 -- # local nb 00:21:54.124 21:08:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:54.382 21:08:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:54.382 { 00:21:54.382 "name": "a03d9e11-68f8-43f3-8a14-5121eb24df57", 00:21:54.382 "aliases": [ 00:21:54.382 "lvs/nvme0n1p0" 00:21:54.382 ], 00:21:54.382 "product_name": "Logical Volume", 00:21:54.382 "block_size": 4096, 00:21:54.382 "num_blocks": 26476544, 00:21:54.382 "uuid": "a03d9e11-68f8-43f3-8a14-5121eb24df57", 00:21:54.382 "assigned_rate_limits": { 00:21:54.382 "rw_ios_per_sec": 0, 00:21:54.382 "rw_mbytes_per_sec": 0, 00:21:54.382 "r_mbytes_per_sec": 0, 00:21:54.382 "w_mbytes_per_sec": 0 00:21:54.382 }, 00:21:54.382 "claimed": false, 00:21:54.382 "zoned": false, 00:21:54.382 "supported_io_types": { 00:21:54.382 "read": true, 00:21:54.382 "write": true, 00:21:54.382 "unmap": true, 00:21:54.382 "write_zeroes": true, 00:21:54.382 "flush": false, 00:21:54.382 "reset": true, 00:21:54.382 "compare": false, 00:21:54.382 "compare_and_write": false, 00:21:54.382 "abort": false, 00:21:54.382 "nvme_admin": false, 00:21:54.382 "nvme_io": false 00:21:54.382 }, 00:21:54.382 "driver_specific": { 00:21:54.382 "lvol": { 00:21:54.382 "lvol_store_uuid": "f626c195-8bfd-4aa9-a4a6-5a600f9a637d", 00:21:54.382 "base_bdev": "nvme0n1", 00:21:54.382 "thin_provision": true, 00:21:54.382 "snapshot": false, 00:21:54.382 "clone": false, 00:21:54.382 "esnap_clone": false 00:21:54.382 } 00:21:54.382 } 00:21:54.382 } 00:21:54.382 ]' 00:21:54.382 21:08:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:54.382 21:08:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:54.382 21:08:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:54.382 21:08:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:21:54.382 21:08:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:21:54.382 21:08:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:21:54.382 21:08:15 -- ftl/common.sh@41 -- # local base_size=5171 00:21:54.382 21:08:15 -- ftl/common.sh@44 -- # local nvc_bdev 00:21:54.382 21:08:15 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:21:54.640 21:08:15 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:54.640 21:08:15 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:54.640 21:08:15 -- ftl/common.sh@48 -- # get_bdev_size a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:54.640 21:08:15 -- common/autotest_common.sh@1367 -- # local bdev_name=a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:54.640 21:08:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:54.640 21:08:15 -- common/autotest_common.sh@1369 -- # local bs 00:21:54.640 21:08:15 -- common/autotest_common.sh@1370 -- # local nb 00:21:54.640 21:08:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:54.898 21:08:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:54.898 { 00:21:54.898 "name": "a03d9e11-68f8-43f3-8a14-5121eb24df57", 00:21:54.898 "aliases": [ 00:21:54.898 "lvs/nvme0n1p0" 00:21:54.898 ], 00:21:54.898 "product_name": "Logical Volume", 00:21:54.898 "block_size": 4096, 00:21:54.898 "num_blocks": 26476544, 
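The sequence just traced (bdev_get_bdevs -b <name>, then jq '.[] .block_size' and jq '.[] .num_blocks') is the get_bdev_size helper converting RPC JSON into a size in MiB: 4096 B x 1310720 blocks = 5120 MiB for nvme0n1 earlier, and 4096 x 26476544 = 103424 MiB for the thin-provisioned lvol. A sketch of the helper as it can be read off the xtrace (only the glue around the traced commands is assumed):

    get_bdev_size_sketch() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))    # size in MiB
    }
    get_bdev_size_sketch nvme0n1    # -> 5120 for the 4 KiB x 1310720-block namespace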
00:21:54.898 "uuid": "a03d9e11-68f8-43f3-8a14-5121eb24df57", 00:21:54.898 "assigned_rate_limits": { 00:21:54.898 "rw_ios_per_sec": 0, 00:21:54.898 "rw_mbytes_per_sec": 0, 00:21:54.898 "r_mbytes_per_sec": 0, 00:21:54.898 "w_mbytes_per_sec": 0 00:21:54.898 }, 00:21:54.898 "claimed": false, 00:21:54.898 "zoned": false, 00:21:54.898 "supported_io_types": { 00:21:54.898 "read": true, 00:21:54.898 "write": true, 00:21:54.898 "unmap": true, 00:21:54.898 "write_zeroes": true, 00:21:54.898 "flush": false, 00:21:54.898 "reset": true, 00:21:54.898 "compare": false, 00:21:54.898 "compare_and_write": false, 00:21:54.898 "abort": false, 00:21:54.898 "nvme_admin": false, 00:21:54.898 "nvme_io": false 00:21:54.898 }, 00:21:54.898 "driver_specific": { 00:21:54.898 "lvol": { 00:21:54.898 "lvol_store_uuid": "f626c195-8bfd-4aa9-a4a6-5a600f9a637d", 00:21:54.898 "base_bdev": "nvme0n1", 00:21:54.898 "thin_provision": true, 00:21:54.898 "snapshot": false, 00:21:54.898 "clone": false, 00:21:54.898 "esnap_clone": false 00:21:54.898 } 00:21:54.898 } 00:21:54.898 } 00:21:54.898 ]' 00:21:54.898 21:08:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:55.156 21:08:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:55.156 21:08:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:55.156 21:08:16 -- common/autotest_common.sh@1373 -- # nb=26476544 00:21:55.156 21:08:16 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:21:55.156 21:08:16 -- common/autotest_common.sh@1377 -- # echo 103424 00:21:55.156 21:08:16 -- ftl/common.sh@48 -- # cache_size=5171 00:21:55.156 21:08:16 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:55.413 21:08:16 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:55.413 21:08:16 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:55.413 21:08:16 -- common/autotest_common.sh@1367 -- # local bdev_name=a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:55.413 21:08:16 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:55.413 21:08:16 -- common/autotest_common.sh@1369 -- # local bs 00:21:55.413 21:08:16 -- common/autotest_common.sh@1370 -- # local nb 00:21:55.413 21:08:16 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a03d9e11-68f8-43f3-8a14-5121eb24df57 00:21:55.413 21:08:16 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:55.413 { 00:21:55.413 "name": "a03d9e11-68f8-43f3-8a14-5121eb24df57", 00:21:55.413 "aliases": [ 00:21:55.413 "lvs/nvme0n1p0" 00:21:55.413 ], 00:21:55.413 "product_name": "Logical Volume", 00:21:55.413 "block_size": 4096, 00:21:55.414 "num_blocks": 26476544, 00:21:55.414 "uuid": "a03d9e11-68f8-43f3-8a14-5121eb24df57", 00:21:55.414 "assigned_rate_limits": { 00:21:55.414 "rw_ios_per_sec": 0, 00:21:55.414 "rw_mbytes_per_sec": 0, 00:21:55.414 "r_mbytes_per_sec": 0, 00:21:55.414 "w_mbytes_per_sec": 0 00:21:55.414 }, 00:21:55.414 "claimed": false, 00:21:55.414 "zoned": false, 00:21:55.414 "supported_io_types": { 00:21:55.414 "read": true, 00:21:55.414 "write": true, 00:21:55.414 "unmap": true, 00:21:55.414 "write_zeroes": true, 00:21:55.414 "flush": false, 00:21:55.414 "reset": true, 00:21:55.414 "compare": false, 00:21:55.414 "compare_and_write": false, 00:21:55.414 "abort": false, 00:21:55.414 "nvme_admin": false, 00:21:55.414 "nvme_io": false 00:21:55.414 }, 00:21:55.414 "driver_specific": { 00:21:55.414 "lvol": { 00:21:55.414 "lvol_store_uuid": 
"f626c195-8bfd-4aa9-a4a6-5a600f9a637d", 00:21:55.414 "base_bdev": "nvme0n1", 00:21:55.414 "thin_provision": true, 00:21:55.414 "snapshot": false, 00:21:55.414 "clone": false, 00:21:55.414 "esnap_clone": false 00:21:55.414 } 00:21:55.414 } 00:21:55.414 } 00:21:55.414 ]' 00:21:55.414 21:08:16 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:55.414 21:08:16 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:55.414 21:08:16 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:55.672 21:08:16 -- common/autotest_common.sh@1373 -- # nb=26476544 00:21:55.672 21:08:16 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:21:55.672 21:08:16 -- common/autotest_common.sh@1377 -- # echo 103424 00:21:55.672 21:08:16 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:55.672 21:08:16 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a03d9e11-68f8-43f3-8a14-5121eb24df57 --l2p_dram_limit 10' 00:21:55.672 21:08:16 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:55.672 21:08:16 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:21:55.672 21:08:16 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:55.672 21:08:16 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a03d9e11-68f8-43f3-8a14-5121eb24df57 --l2p_dram_limit 10 -c nvc0n1p0 00:21:55.672 [2024-12-08 21:08:16.686622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.672 [2024-12-08 21:08:16.686668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:55.672 [2024-12-08 21:08:16.686689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:55.672 [2024-12-08 21:08:16.686701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.672 [2024-12-08 21:08:16.686768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.672 [2024-12-08 21:08:16.686785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.672 [2024-12-08 21:08:16.686799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:55.672 [2024-12-08 21:08:16.686809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.672 [2024-12-08 21:08:16.686837] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:55.672 [2024-12-08 21:08:16.687708] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:55.672 [2024-12-08 21:08:16.687737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.672 [2024-12-08 21:08:16.687749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.672 [2024-12-08 21:08:16.687762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:21:55.672 [2024-12-08 21:08:16.687772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.672 [2024-12-08 21:08:16.687887] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 63da4528-73f7-4366-8cb9-ed8d58b39646 00:21:55.672 [2024-12-08 21:08:16.688828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.672 [2024-12-08 21:08:16.688858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:55.672 [2024-12-08 21:08:16.688872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.019 ms 00:21:55.672 [2024-12-08 21:08:16.688897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.672 [2024-12-08 21:08:16.692781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.672 [2024-12-08 21:08:16.692816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.672 [2024-12-08 21:08:16.692829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.818 ms 00:21:55.672 [2024-12-08 21:08:16.692842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.672 [2024-12-08 21:08:16.692941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.672 [2024-12-08 21:08:16.692960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.672 [2024-12-08 21:08:16.692972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:55.672 [2024-12-08 21:08:16.692986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.672 [2024-12-08 21:08:16.693049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.672 [2024-12-08 21:08:16.693085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:55.672 [2024-12-08 21:08:16.693099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:55.672 [2024-12-08 21:08:16.693111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.672 [2024-12-08 21:08:16.693144] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:55.673 [2024-12-08 21:08:16.696852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-12-08 21:08:16.696881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.673 [2024-12-08 21:08:16.696896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.715 ms 00:21:55.673 [2024-12-08 21:08:16.696906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.673 [2024-12-08 21:08:16.696948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-12-08 21:08:16.696961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:55.673 [2024-12-08 21:08:16.696974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:55.673 [2024-12-08 21:08:16.696983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.673 [2024-12-08 21:08:16.697030] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:55.673 [2024-12-08 21:08:16.697207] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:55.673 [2024-12-08 21:08:16.697232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:55.673 [2024-12-08 21:08:16.697246] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:55.673 [2024-12-08 21:08:16.697262] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:55.673 [2024-12-08 21:08:16.697274] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:55.673 [2024-12-08 21:08:16.697289] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:55.673 [2024-12-08 
21:08:16.697312] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:55.673 [2024-12-08 21:08:16.697325] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:55.673 [2024-12-08 21:08:16.697335] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:55.673 [2024-12-08 21:08:16.697348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-12-08 21:08:16.697357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:55.673 [2024-12-08 21:08:16.697370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:21:55.673 [2024-12-08 21:08:16.697380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.673 [2024-12-08 21:08:16.697449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-12-08 21:08:16.697463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:55.673 [2024-12-08 21:08:16.697476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:55.673 [2024-12-08 21:08:16.697488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.673 [2024-12-08 21:08:16.697578] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:55.673 [2024-12-08 21:08:16.697593] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:55.673 [2024-12-08 21:08:16.697606] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.673 [2024-12-08 21:08:16.697616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:55.673 [2024-12-08 21:08:16.697637] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:55.673 [2024-12-08 21:08:16.697659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:55.673 [2024-12-08 21:08:16.697671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.673 [2024-12-08 21:08:16.697690] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:55.673 [2024-12-08 21:08:16.697700] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:55.673 [2024-12-08 21:08:16.697712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.673 [2024-12-08 21:08:16.697721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:55.673 [2024-12-08 21:08:16.697732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:55.673 [2024-12-08 21:08:16.697741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:55.673 [2024-12-08 21:08:16.697763] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:55.673 [2024-12-08 21:08:16.697773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:55.673 [2024-12-08 21:08:16.697793] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:55.673 [2024-12-08 21:08:16.697802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:55.673 [2024-12-08 21:08:16.697813] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:55.673 [2024-12-08 21:08:16.697822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:55.673 [2024-12-08 21:08:16.697841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:55.673 [2024-12-08 21:08:16.697852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:55.673 [2024-12-08 21:08:16.697871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:55.673 [2024-12-08 21:08:16.697880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:55.673 [2024-12-08 21:08:16.697899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:55.673 [2024-12-08 21:08:16.697911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:55.673 [2024-12-08 21:08:16.697931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:55.673 [2024-12-08 21:08:16.697940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:55.673 [2024-12-08 21:08:16.697950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.673 [2024-12-08 21:08:16.697959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:55.673 [2024-12-08 21:08:16.697972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:55.673 [2024-12-08 21:08:16.697981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.673 [2024-12-08 21:08:16.697992] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:55.673 [2024-12-08 21:08:16.698002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:55.673 [2024-12-08 21:08:16.698013] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.673 [2024-12-08 21:08:16.698023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.673 [2024-12-08 21:08:16.698037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:55.673 [2024-12-08 21:08:16.698047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:55.673 [2024-12-08 21:08:16.698057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:55.673 [2024-12-08 21:08:16.698067] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:55.673 [2024-12-08 21:08:16.698079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:55.673 [2024-12-08 21:08:16.698088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:55.673 [2024-12-08 21:08:16.698124] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:55.673 [2024-12-08 21:08:16.698153] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.673 [2024-12-08 21:08:16.698168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:55.673 [2024-12-08 21:08:16.698179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:55.673 [2024-12-08 21:08:16.698191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:55.673 [2024-12-08 21:08:16.698201] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:55.673 [2024-12-08 21:08:16.698213] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:55.673 [2024-12-08 21:08:16.698223] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:55.673 [2024-12-08 21:08:16.698235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:55.673 [2024-12-08 21:08:16.698245] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:55.673 [2024-12-08 21:08:16.698257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:55.673 [2024-12-08 21:08:16.698267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:55.673 [2024-12-08 21:08:16.698279] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:55.673 [2024-12-08 21:08:16.698290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:55.673 [2024-12-08 21:08:16.698306] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:55.673 [2024-12-08 21:08:16.698316] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:55.673 [2024-12-08 21:08:16.698330] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.673 [2024-12-08 21:08:16.698340] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:55.673 [2024-12-08 21:08:16.698352] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:55.673 [2024-12-08 21:08:16.698362] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:55.673 [2024-12-08 21:08:16.698374] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:55.673 [2024-12-08 21:08:16.698386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-12-08 21:08:16.698398] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:55.673 [2024-12-08 21:08:16.698410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:21:55.673 [2024-12-08 21:08:16.698421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.713886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.931 [2024-12-08 21:08:16.713944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.931 [2024-12-08 21:08:16.713967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.415 ms 00:21:55.931 [2024-12-08 21:08:16.713982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.714070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.931 [2024-12-08 21:08:16.714112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:55.931 [2024-12-08 21:08:16.714156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:55.931 [2024-12-08 21:08:16.714179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.745648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.931 [2024-12-08 21:08:16.745692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.931 [2024-12-08 21:08:16.745706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.388 ms 00:21:55.931 [2024-12-08 21:08:16.745718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.745760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.931 [2024-12-08 21:08:16.745775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.931 [2024-12-08 21:08:16.745787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:55.931 [2024-12-08 21:08:16.745799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.746218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.931 [2024-12-08 21:08:16.746245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.931 [2024-12-08 21:08:16.746257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:21:55.931 [2024-12-08 21:08:16.746269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.746424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.931 [2024-12-08 21:08:16.746445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.931 [2024-12-08 21:08:16.746457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:21:55.931 [2024-12-08 21:08:16.746469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.761511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.931 [2024-12-08 21:08:16.761684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.931 [2024-12-08 21:08:16.761710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.005 ms 00:21:55.931 [2024-12-08 21:08:16.761725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.772695] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 
(of 10) MiB 00:21:55.931 [2024-12-08 21:08:16.775229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.931 [2024-12-08 21:08:16.775259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:55.931 [2024-12-08 21:08:16.775276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.412 ms 00:21:55.931 [2024-12-08 21:08:16.775287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.845088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.931 [2024-12-08 21:08:16.845162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:55.931 [2024-12-08 21:08:16.845201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.765 ms 00:21:55.931 [2024-12-08 21:08:16.845212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.931 [2024-12-08 21:08:16.845286] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:21:55.931 [2024-12-08 21:08:16.845306] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:21:58.462 [2024-12-08 21:08:19.342771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.462 [2024-12-08 21:08:19.342839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:58.462 [2024-12-08 21:08:19.342861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2497.497 ms 00:21:58.462 [2024-12-08 21:08:19.342871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.462 [2024-12-08 21:08:19.343123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.462 [2024-12-08 21:08:19.343144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:58.462 [2024-12-08 21:08:19.343161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:21:58.462 [2024-12-08 21:08:19.343171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.462 [2024-12-08 21:08:19.368131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.462 [2024-12-08 21:08:19.368314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:58.462 [2024-12-08 21:08:19.368348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.882 ms 00:21:58.462 [2024-12-08 21:08:19.368361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.462 [2024-12-08 21:08:19.392583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.462 [2024-12-08 21:08:19.392619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:58.462 [2024-12-08 21:08:19.392656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.171 ms 00:21:58.462 [2024-12-08 21:08:19.392667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.462 [2024-12-08 21:08:19.392998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.462 [2024-12-08 21:08:19.393016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:58.462 [2024-12-08 21:08:19.393028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:21:58.462 [2024-12-08 21:08:19.393038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.462 [2024-12-08 21:08:19.457336] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.462 [2024-12-08 21:08:19.457373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:58.462 [2024-12-08 21:08:19.457392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.253 ms 00:21:58.462 [2024-12-08 21:08:19.457402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.462 [2024-12-08 21:08:19.482543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.462 [2024-12-08 21:08:19.482582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:58.462 [2024-12-08 21:08:19.482600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.095 ms 00:21:58.462 [2024-12-08 21:08:19.482610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.462 [2024-12-08 21:08:19.484249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.462 [2024-12-08 21:08:19.484427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:58.462 [2024-12-08 21:08:19.484461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:21:58.462 [2024-12-08 21:08:19.484473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.720 [2024-12-08 21:08:19.510346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.720 [2024-12-08 21:08:19.510384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:58.720 [2024-12-08 21:08:19.510403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.791 ms 00:21:58.720 [2024-12-08 21:08:19.510413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.720 [2024-12-08 21:08:19.510473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.720 [2024-12-08 21:08:19.510491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:58.720 [2024-12-08 21:08:19.510504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:58.720 [2024-12-08 21:08:19.510514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.720 [2024-12-08 21:08:19.510618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.720 [2024-12-08 21:08:19.510634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:58.720 [2024-12-08 21:08:19.510647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:58.720 [2024-12-08 21:08:19.510656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.720 [2024-12-08 21:08:19.511777] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2824.625 ms, result 0 00:21:58.720 { 00:21:58.720 "name": "ftl0", 00:21:58.720 "uuid": "63da4528-73f7-4366-8cb9-ed8d58b39646" 00:21:58.720 } 00:21:58.720 21:08:19 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:58.720 21:08:19 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:58.977 21:08:19 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:58.977 21:08:19 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:58.977 21:08:19 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:59.236 /dev/nbd0 00:21:59.236 21:08:20 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:59.236 21:08:20 -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:59.236 21:08:20 -- common/autotest_common.sh@867 -- # local i 00:21:59.236 21:08:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:59.236 21:08:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:59.236 21:08:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:59.236 21:08:20 -- common/autotest_common.sh@871 -- # break 00:21:59.236 21:08:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:59.236 21:08:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:59.236 21:08:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:59.236 1+0 records in 00:21:59.236 1+0 records out 00:21:59.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263828 s, 15.5 MB/s 00:21:59.236 21:08:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:59.236 21:08:20 -- common/autotest_common.sh@884 -- # size=4096 00:21:59.236 21:08:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:59.236 21:08:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:59.236 21:08:20 -- common/autotest_common.sh@887 -- # return 0 00:21:59.236 21:08:20 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:59.236 [2024-12-08 21:08:20.175941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:59.236 [2024-12-08 21:08:20.176349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75788 ] 00:21:59.494 [2024-12-08 21:08:20.350288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.753 [2024-12-08 21:08:20.572221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.128  [2024-12-08T21:08:23.107Z] Copying: 206/1024 [MB] (206 MBps) [2024-12-08T21:08:24.042Z] Copying: 413/1024 [MB] (207 MBps) [2024-12-08T21:08:25.000Z] Copying: 622/1024 [MB] (208 MBps) [2024-12-08T21:08:25.980Z] Copying: 824/1024 [MB] (201 MBps) [2024-12-08T21:08:26.915Z] Copying: 1024/1024 [MB] (average 204 MBps) 00:22:05.872 00:22:05.872 21:08:26 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:07.772 21:08:28 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:07.772 [2024-12-08 21:08:28.626156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
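The two spdk_dd invocations above stage the test data: dirty_shutdown.sh@75 fills testfile with 262144 blocks of 4096 bytes from /dev/urandom (262144 x 4096 B = 1 GiB) at roughly 204 MBps, @76 records the file's md5 so its contents can be compared later, and @77 (whose startup banner appears just above) pushes the same file onto the FTL device through /dev/nbd0 with O_DIRECT. A minimal sketch of that sequence, assuming the repo paths shown in the log; the md5_before variable is illustrative, not part of the harness:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Stage 1 GiB of random data: 262144 x 4 KiB blocks = 1 GiB
    "$SPDK/build/bin/spdk_dd" -m 0x2 -r /var/tmp/spdk_dd.sock \
        --if=/dev/urandom --of="$SPDK/test/ftl/testfile" --bs=4096 --count=262144
    md5_before=$(md5sum "$SPDK/test/ftl/testfile")   # illustrative capture of the checksum
    # Copy the file onto the FTL bdev via the nbd mapping started at @71
    "$SPDK/build/bin/spdk_dd" -m 0x2 -r /var/tmp/spdk_dd.sock \
        --if="$SPDK/test/ftl/testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The progress ticks that follow settle around 15 MBps rather than the ~204 MBps of the urandom copy, plausibly because every 4 KiB write now crosses the nbd kernel/userspace boundary and the FTL write path instead of a local file.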
00:22:07.772 [2024-12-08 21:08:28.626929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75876 ] 00:22:07.772 [2024-12-08 21:08:28.799648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.030 [2024-12-08 21:08:28.992066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.409  [2024-12-08T21:08:31.389Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-08T21:08:32.326Z] Copying: 29/1024 [MB] (15 MBps) [2024-12-08T21:08:33.263Z] Copying: 43/1024 [MB] (14 MBps) [2024-12-08T21:08:34.641Z] Copying: 58/1024 [MB] (15 MBps) [2024-12-08T21:08:35.575Z] Copying: 73/1024 [MB] (14 MBps) [2024-12-08T21:08:36.510Z] Copying: 88/1024 [MB] (14 MBps) [2024-12-08T21:08:37.447Z] Copying: 103/1024 [MB] (15 MBps) [2024-12-08T21:08:38.384Z] Copying: 118/1024 [MB] (15 MBps) [2024-12-08T21:08:39.321Z] Copying: 134/1024 [MB] (15 MBps) [2024-12-08T21:08:40.257Z] Copying: 149/1024 [MB] (15 MBps) [2024-12-08T21:08:41.630Z] Copying: 164/1024 [MB] (15 MBps) [2024-12-08T21:08:42.565Z] Copying: 179/1024 [MB] (15 MBps) [2024-12-08T21:08:43.499Z] Copying: 194/1024 [MB] (14 MBps) [2024-12-08T21:08:44.434Z] Copying: 209/1024 [MB] (14 MBps) [2024-12-08T21:08:45.370Z] Copying: 224/1024 [MB] (15 MBps) [2024-12-08T21:08:46.304Z] Copying: 240/1024 [MB] (15 MBps) [2024-12-08T21:08:47.241Z] Copying: 255/1024 [MB] (15 MBps) [2024-12-08T21:08:48.619Z] Copying: 271/1024 [MB] (15 MBps) [2024-12-08T21:08:49.556Z] Copying: 286/1024 [MB] (15 MBps) [2024-12-08T21:08:50.493Z] Copying: 302/1024 [MB] (15 MBps) [2024-12-08T21:08:51.430Z] Copying: 317/1024 [MB] (15 MBps) [2024-12-08T21:08:52.368Z] Copying: 333/1024 [MB] (15 MBps) [2024-12-08T21:08:53.306Z] Copying: 348/1024 [MB] (15 MBps) [2024-12-08T21:08:54.244Z] Copying: 363/1024 [MB] (15 MBps) [2024-12-08T21:08:55.622Z] Copying: 379/1024 [MB] (16 MBps) [2024-12-08T21:08:56.559Z] Copying: 395/1024 [MB] (15 MBps) [2024-12-08T21:08:57.513Z] Copying: 410/1024 [MB] (15 MBps) [2024-12-08T21:08:58.482Z] Copying: 425/1024 [MB] (15 MBps) [2024-12-08T21:08:59.416Z] Copying: 441/1024 [MB] (15 MBps) [2024-12-08T21:09:00.349Z] Copying: 457/1024 [MB] (15 MBps) [2024-12-08T21:09:01.284Z] Copying: 472/1024 [MB] (15 MBps) [2024-12-08T21:09:02.221Z] Copying: 487/1024 [MB] (15 MBps) [2024-12-08T21:09:03.601Z] Copying: 502/1024 [MB] (15 MBps) [2024-12-08T21:09:04.538Z] Copying: 518/1024 [MB] (15 MBps) [2024-12-08T21:09:05.477Z] Copying: 533/1024 [MB] (15 MBps) [2024-12-08T21:09:06.412Z] Copying: 548/1024 [MB] (14 MBps) [2024-12-08T21:09:07.348Z] Copying: 563/1024 [MB] (15 MBps) [2024-12-08T21:09:08.285Z] Copying: 578/1024 [MB] (15 MBps) [2024-12-08T21:09:09.222Z] Copying: 593/1024 [MB] (15 MBps) [2024-12-08T21:09:10.600Z] Copying: 608/1024 [MB] (15 MBps) [2024-12-08T21:09:11.538Z] Copying: 623/1024 [MB] (15 MBps) [2024-12-08T21:09:12.475Z] Copying: 638/1024 [MB] (15 MBps) [2024-12-08T21:09:13.413Z] Copying: 654/1024 [MB] (15 MBps) [2024-12-08T21:09:14.348Z] Copying: 669/1024 [MB] (15 MBps) [2024-12-08T21:09:15.282Z] Copying: 684/1024 [MB] (15 MBps) [2024-12-08T21:09:16.656Z] Copying: 700/1024 [MB] (15 MBps) [2024-12-08T21:09:17.222Z] Copying: 715/1024 [MB] (15 MBps) [2024-12-08T21:09:18.598Z] Copying: 731/1024 [MB] (15 MBps) [2024-12-08T21:09:19.536Z] Copying: 746/1024 [MB] (15 MBps) [2024-12-08T21:09:20.474Z] Copying: 761/1024 [MB] (15 MBps) [2024-12-08T21:09:21.412Z] Copying: 777/1024 [MB] 
(15 MBps) [2024-12-08T21:09:22.359Z] Copying: 792/1024 [MB] (15 MBps) [2024-12-08T21:09:23.296Z] Copying: 807/1024 [MB] (15 MBps) [2024-12-08T21:09:24.234Z] Copying: 823/1024 [MB] (15 MBps) [2024-12-08T21:09:25.614Z] Copying: 838/1024 [MB] (15 MBps) [2024-12-08T21:09:26.551Z] Copying: 854/1024 [MB] (15 MBps) [2024-12-08T21:09:27.503Z] Copying: 869/1024 [MB] (15 MBps) [2024-12-08T21:09:28.455Z] Copying: 885/1024 [MB] (15 MBps) [2024-12-08T21:09:29.419Z] Copying: 900/1024 [MB] (15 MBps) [2024-12-08T21:09:30.357Z] Copying: 916/1024 [MB] (15 MBps) [2024-12-08T21:09:31.293Z] Copying: 931/1024 [MB] (15 MBps) [2024-12-08T21:09:32.228Z] Copying: 947/1024 [MB] (15 MBps) [2024-12-08T21:09:33.604Z] Copying: 962/1024 [MB] (15 MBps) [2024-12-08T21:09:34.540Z] Copying: 978/1024 [MB] (15 MBps) [2024-12-08T21:09:35.475Z] Copying: 993/1024 [MB] (15 MBps) [2024-12-08T21:09:36.410Z] Copying: 1008/1024 [MB] (15 MBps) [2024-12-08T21:09:36.410Z] Copying: 1023/1024 [MB] (15 MBps) [2024-12-08T21:09:37.346Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:23:16.303 00:23:16.303 21:09:37 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:16.303 21:09:37 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:16.562 21:09:37 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:16.821 [2024-12-08 21:09:37.711290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.821 [2024-12-08 21:09:37.711369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:16.821 [2024-12-08 21:09:37.711429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:16.821 [2024-12-08 21:09:37.711459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.821 [2024-12-08 21:09:37.711495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:16.821 [2024-12-08 21:09:37.714865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.821 [2024-12-08 21:09:37.715222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:16.821 [2024-12-08 21:09:37.715411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:23:16.821 [2024-12-08 21:09:37.715483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.821 [2024-12-08 21:09:37.717723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.821 [2024-12-08 21:09:37.717811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:16.821 [2024-12-08 21:09:37.717907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms 00:23:16.821 [2024-12-08 21:09:37.717967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.821 [2024-12-08 21:09:37.734415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.821 [2024-12-08 21:09:37.734663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:16.821 [2024-12-08 21:09:37.734715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.385 ms 00:23:16.821 [2024-12-08 21:09:37.734730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.821 [2024-12-08 21:09:37.741579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.821 [2024-12-08 21:09:37.741614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:16.821 
[2024-12-08 21:09:37.741648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.780 ms 00:23:16.822 [2024-12-08 21:09:37.741677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.822 [2024-12-08 21:09:37.772519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.822 [2024-12-08 21:09:37.772615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:16.822 [2024-12-08 21:09:37.772674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.719 ms 00:23:16.822 [2024-12-08 21:09:37.772703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.822 [2024-12-08 21:09:37.790946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.822 [2024-12-08 21:09:37.790989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:16.822 [2024-12-08 21:09:37.791031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.187 ms 00:23:16.822 [2024-12-08 21:09:37.791043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.822 [2024-12-08 21:09:37.791273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.822 [2024-12-08 21:09:37.791295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:16.822 [2024-12-08 21:09:37.791312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:23:16.822 [2024-12-08 21:09:37.791324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.822 [2024-12-08 21:09:37.820755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.822 [2024-12-08 21:09:37.820792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:16.822 [2024-12-08 21:09:37.820826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.399 ms 00:23:16.822 [2024-12-08 21:09:37.820838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.822 [2024-12-08 21:09:37.849841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.822 [2024-12-08 21:09:37.849877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:16.822 [2024-12-08 21:09:37.849911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.927 ms 00:23:16.822 [2024-12-08 21:09:37.849922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.082 [2024-12-08 21:09:37.880149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.082 [2024-12-08 21:09:37.880187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:17.082 [2024-12-08 21:09:37.880207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.132 ms 00:23:17.082 [2024-12-08 21:09:37.880219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.082 [2024-12-08 21:09:37.909058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.082 [2024-12-08 21:09:37.909130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:17.082 [2024-12-08 21:09:37.909150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.676 ms 00:23:17.082 [2024-12-08 21:09:37.909161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.082 [2024-12-08 21:09:37.909251] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:17.082 [2024-12-08 21:09:37.909281] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 
21:09:37.909635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.909979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:23:17.082 [2024-12-08 21:09:37.909991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:17.082 [2024-12-08 21:09:37.910347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:17.083 [2024-12-08 21:09:37.910677] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:17.083 [2024-12-08 21:09:37.910690] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 63da4528-73f7-4366-8cb9-ed8d58b39646 00:23:17.083 [2024-12-08 21:09:37.910704] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:17.083 [2024-12-08 21:09:37.910716] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:17.083 [2024-12-08 21:09:37.910727] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:17.083 [2024-12-08 21:09:37.910739] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:17.083 [2024-12-08 21:09:37.910750] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:17.083 [2024-12-08 21:09:37.910763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:17.083 [2024-12-08 21:09:37.910774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:17.083 [2024-12-08 21:09:37.910786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:17.083 [2024-12-08 21:09:37.910796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:17.083 [2024-12-08 21:09:37.910811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.083 [2024-12-08 21:09:37.910827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:17.083 [2024-12-08 21:09:37.910842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:23:17.083 [2024-12-08 21:09:37.910853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:37.926452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.083 [2024-12-08 21:09:37.926486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:17.083 [2024-12-08 21:09:37.926529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.534 ms 00:23:17.083 [2024-12-08 21:09:37.926542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:37.926784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.083 [2024-12-08 21:09:37.926811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:17.083 [2024-12-08 21:09:37.926828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:23:17.083 [2024-12-08 21:09:37.926840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:37.982198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:37.982250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:17.083 [2024-12-08 21:09:37.982286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:37.982298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:37.982398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:37.982415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:17.083 [2024-12-08 21:09:37.982443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:37.982454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:37.982565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:37.982591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:17.083 [2024-12-08 21:09:37.982607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:37.982618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:37.982648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:37.982662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:17.083 [2024-12-08 21:09:37.982675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:37.982686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:38.080398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:38.080484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:17.083 [2024-12-08 21:09:38.080519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:38.080531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:38.117163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:38.117214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:17.083 [2024-12-08 21:09:38.117233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:38.117244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:38.117335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:38.117353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:17.083 [2024-12-08 21:09:38.117366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:38.117377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:38.117455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:38.117472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:17.083 [2024-12-08 21:09:38.117485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:38.117496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:38.117637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:38.117659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:17.083 [2024-12-08 21:09:38.117674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:38.117685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:38.117743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:38.117767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:17.083 [2024-12-08 21:09:38.117782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 
[2024-12-08 21:09:38.117793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:38.117860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:38.117878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:17.083 [2024-12-08 21:09:38.117892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:38.117903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:38.117959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.083 [2024-12-08 21:09:38.117980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:17.083 [2024-12-08 21:09:38.117995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.083 [2024-12-08 21:09:38.118006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.083 [2024-12-08 21:09:38.118186] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.846 ms, result 0 00:23:17.343 true 00:23:17.343 21:09:38 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75651 00:23:17.343 21:09:38 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75651 00:23:17.343 21:09:38 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:17.343 [2024-12-08 21:09:38.240979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:17.343 [2024-12-08 21:09:38.241162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76588 ] 00:23:17.602 [2024-12-08 21:09:38.408850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.602 [2024-12-08 21:09:38.580910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.981  [2024-12-08T21:09:40.962Z] Copying: 192/1024 [MB] (192 MBps) [2024-12-08T21:09:41.899Z] Copying: 399/1024 [MB] (206 MBps) [2024-12-08T21:09:43.276Z] Copying: 605/1024 [MB] (206 MBps) [2024-12-08T21:09:44.212Z] Copying: 799/1024 [MB] (193 MBps) [2024-12-08T21:09:44.212Z] Copying: 1004/1024 [MB] (205 MBps) [2024-12-08T21:09:45.149Z] Copying: 1024/1024 [MB] (average 200 MBps) 00:23:24.106 00:23:24.106 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75651 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:24.106 21:09:44 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:24.106 [2024-12-08 21:09:44.918596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
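This stretch is the crash simulation itself: immediately after the clean 'FTL shutdown' above returns true, @83 kills the spdk_tgt (pid 75651) with SIGKILL and @84 removes its trace shm file, @87 stages a second 1 GiB file (testfile2), and @88 launches the spdk_dd whose banner appears just above. Because no target process survives, that spdk_dd rebuilds the bdev stack itself from the config saved earlier via save_subsystem_config, then writes at block offset 262144, i.e. past the first 1 GiB, assuming the same 4 KiB block size used throughout the test. A minimal sketch, with $svc_pid as an illustrative stand-in for the target's pid:

    SPDK=/home/vagrant/spdk_repo/spdk
    kill -9 "$svc_pid"                               # simulate an unclean stop of the target
    rm -f "/dev/shm/spdk_tgt_trace.pid$svc_pid"
    # --json replays the saved bdev config so spdk_dd can open ftl0 directly (--ob);
    # --seek skips the 262144 blocks written before the restart
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"

The 'Currently unable to find bdev with name: nvc0n1' notices below are retries while the replayed config registers the cache device, a blobstore recovery pass runs before FTL comes up, and the startup that follows runs Restore steps against existing metadata instead of the first-startup NV cache scrub seen at the top of this log.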
00:23:24.106 [2024-12-08 21:09:44.918751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76664 ] 00:23:24.106 [2024-12-08 21:09:45.070799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.365 [2024-12-08 21:09:45.219113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.623 [2024-12-08 21:09:45.471507] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:24.623 [2024-12-08 21:09:45.471609] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:24.623 [2024-12-08 21:09:45.533556] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:24.623 [2024-12-08 21:09:45.534155] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:24.623 [2024-12-08 21:09:45.534422] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:24.883 [2024-12-08 21:09:45.798384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.798449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:24.883 [2024-12-08 21:09:45.798483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:24.883 [2024-12-08 21:09:45.798494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.798555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.798573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.883 [2024-12-08 21:09:45.798588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:24.883 [2024-12-08 21:09:45.798598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.798627] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:24.883 [2024-12-08 21:09:45.799532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:24.883 [2024-12-08 21:09:45.799583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.799596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.883 [2024-12-08 21:09:45.799607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:23:24.883 [2024-12-08 21:09:45.799618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.800688] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:24.883 [2024-12-08 21:09:45.813886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.813939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:24.883 [2024-12-08 21:09:45.813971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.199 ms 00:23:24.883 [2024-12-08 21:09:45.813981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.814041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.814062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:24.883 
[2024-12-08 21:09:45.814085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:24.883 [2024-12-08 21:09:45.814097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.818522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.818572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.883 [2024-12-08 21:09:45.818603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.329 ms 00:23:24.883 [2024-12-08 21:09:45.818614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.818714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.818732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.883 [2024-12-08 21:09:45.818744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:24.883 [2024-12-08 21:09:45.818754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.818822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.818839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:24.883 [2024-12-08 21:09:45.818851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:24.883 [2024-12-08 21:09:45.818862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.818900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:24.883 [2024-12-08 21:09:45.822757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.822804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.883 [2024-12-08 21:09:45.822819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.872 ms 00:23:24.883 [2024-12-08 21:09:45.822830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.822872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.883 [2024-12-08 21:09:45.822887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:24.883 [2024-12-08 21:09:45.822899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:24.883 [2024-12-08 21:09:45.822909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.883 [2024-12-08 21:09:45.822950] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:24.883 [2024-12-08 21:09:45.822980] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:24.883 [2024-12-08 21:09:45.823032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:24.883 [2024-12-08 21:09:45.823053] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:24.883 [2024-12-08 21:09:45.823156] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:24.883 [2024-12-08 21:09:45.823175] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:24.883 [2024-12-08 21:09:45.823189] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:24.884 [2024-12-08 21:09:45.823205] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:24.884 [2024-12-08 21:09:45.823218] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:24.884 [2024-12-08 21:09:45.823230] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:24.884 [2024-12-08 21:09:45.823240] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:24.884 [2024-12-08 21:09:45.823251] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:24.884 [2024-12-08 21:09:45.823262] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:24.884 [2024-12-08 21:09:45.823277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.884 [2024-12-08 21:09:45.823289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:24.884 [2024-12-08 21:09:45.823301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:23:24.884 [2024-12-08 21:09:45.823312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.884 [2024-12-08 21:09:45.823380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.884 [2024-12-08 21:09:45.823395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:24.884 [2024-12-08 21:09:45.823406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:24.884 [2024-12-08 21:09:45.823417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.884 [2024-12-08 21:09:45.823512] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:24.884 [2024-12-08 21:09:45.823535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:24.884 [2024-12-08 21:09:45.823553] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.884 [2024-12-08 21:09:45.823564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.884 [2024-12-08 21:09:45.823576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:24.884 [2024-12-08 21:09:45.823586] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:24.884 [2024-12-08 21:09:45.823597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:24.884 [2024-12-08 21:09:45.823607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:24.884 [2024-12-08 21:09:45.823617] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:24.884 [2024-12-08 21:09:45.823627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.884 [2024-12-08 21:09:45.823637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:24.884 [2024-12-08 21:09:45.823647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:24.884 [2024-12-08 21:09:45.823669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.884 [2024-12-08 21:09:45.823680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:24.884 [2024-12-08 21:09:45.823690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:24.884 [2024-12-08 21:09:45.823700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:23:24.884 [2024-12-08 21:09:45.823726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:24.884 [2024-12-08 21:09:45.823736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:24.884 [2024-12-08 21:09:45.823747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.884 [2024-12-08 21:09:45.823757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:24.884 [2024-12-08 21:09:45.823767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:24.884 [2024-12-08 21:09:45.823777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:24.884 [2024-12-08 21:09:45.823787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:24.884 [2024-12-08 21:09:45.823799] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:24.884 [2024-12-08 21:09:45.823809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:24.884 [2024-12-08 21:09:45.823835] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:24.884 [2024-12-08 21:09:45.823846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:24.884 [2024-12-08 21:09:45.823858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:24.884 [2024-12-08 21:09:45.823868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:24.884 [2024-12-08 21:09:45.823879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:24.884 [2024-12-08 21:09:45.823890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:24.884 [2024-12-08 21:09:45.823900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:24.884 [2024-12-08 21:09:45.823911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:24.884 [2024-12-08 21:09:45.823922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:24.884 [2024-12-08 21:09:45.823932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:24.884 [2024-12-08 21:09:45.823943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:24.884 [2024-12-08 21:09:45.823953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.884 [2024-12-08 21:09:45.823964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:24.884 [2024-12-08 21:09:45.823976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:24.884 [2024-12-08 21:09:45.823987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.884 [2024-12-08 21:09:45.823997] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:24.884 [2024-12-08 21:09:45.824009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:24.884 [2024-12-08 21:09:45.824020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.884 [2024-12-08 21:09:45.824032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.884 [2024-12-08 21:09:45.824043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:24.884 [2024-12-08 21:09:45.824055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:24.884 [2024-12-08 21:09:45.824065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:24.884 [2024-12-08 21:09:45.824076] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:23:24.884 [2024-12-08 21:09:45.824086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:24.884 [2024-12-08 21:09:45.824097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:24.884 [2024-12-08 21:09:45.824151] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:24.884 [2024-12-08 21:09:45.824166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.884 [2024-12-08 21:09:45.824179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:24.884 [2024-12-08 21:09:45.824191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:24.884 [2024-12-08 21:09:45.824203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:24.884 [2024-12-08 21:09:45.824215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:24.884 [2024-12-08 21:09:45.824227] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:24.884 [2024-12-08 21:09:45.824238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:24.884 [2024-12-08 21:09:45.824250] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:24.884 [2024-12-08 21:09:45.824262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:24.884 [2024-12-08 21:09:45.824274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:24.884 [2024-12-08 21:09:45.824292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:24.884 [2024-12-08 21:09:45.824304] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:24.884 [2024-12-08 21:09:45.824316] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:24.884 [2024-12-08 21:09:45.824328] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:24.884 [2024-12-08 21:09:45.824340] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:24.884 [2024-12-08 21:09:45.824353] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.884 [2024-12-08 21:09:45.824371] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:24.884 [2024-12-08 21:09:45.824385] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:24.884 
[2024-12-08 21:09:45.824397] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:24.884 [2024-12-08 21:09:45.824424] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:24.884 [2024-12-08 21:09:45.824436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.884 [2024-12-08 21:09:45.824448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:24.884 [2024-12-08 21:09:45.824460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:23:24.884 [2024-12-08 21:09:45.824472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.884 [2024-12-08 21:09:45.842256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.884 [2024-12-08 21:09:45.842319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.884 [2024-12-08 21:09:45.842337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.697 ms 00:23:24.884 [2024-12-08 21:09:45.842351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.884 [2024-12-08 21:09:45.842497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.884 [2024-12-08 21:09:45.842512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:24.884 [2024-12-08 21:09:45.842523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:24.884 [2024-12-08 21:09:45.842534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.884 [2024-12-08 21:09:45.903557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.884 [2024-12-08 21:09:45.903633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.885 [2024-12-08 21:09:45.903652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.917 ms 00:23:24.885 [2024-12-08 21:09:45.903663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.885 [2024-12-08 21:09:45.903741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.885 [2024-12-08 21:09:45.903758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.885 [2024-12-08 21:09:45.903771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:24.885 [2024-12-08 21:09:45.903787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.885 [2024-12-08 21:09:45.904208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.885 [2024-12-08 21:09:45.904239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.885 [2024-12-08 21:09:45.904255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:23:24.885 [2024-12-08 21:09:45.904268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.885 [2024-12-08 21:09:45.904443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.885 [2024-12-08 21:09:45.904500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.885 [2024-12-08 21:09:45.904530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:23:24.885 [2024-12-08 21:09:45.904558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.885 [2024-12-08 21:09:45.919887] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.885 [2024-12-08 21:09:45.919945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.885 [2024-12-08 21:09:45.919965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.299 ms 00:23:24.885 [2024-12-08 21:09:45.919976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.144 [2024-12-08 21:09:45.937151] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:25.144 [2024-12-08 21:09:45.937220] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:25.144 [2024-12-08 21:09:45.937242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.144 [2024-12-08 21:09:45.937257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:25.144 [2024-12-08 21:09:45.937275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.009 ms 00:23:25.144 [2024-12-08 21:09:45.937287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.144 [2024-12-08 21:09:45.966517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.144 [2024-12-08 21:09:45.966600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:25.144 [2024-12-08 21:09:45.966631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.134 ms 00:23:25.144 [2024-12-08 21:09:45.966645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.144 [2024-12-08 21:09:45.981650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.144 [2024-12-08 21:09:45.981716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:25.144 [2024-12-08 21:09:45.981749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.906 ms 00:23:25.145 [2024-12-08 21:09:45.981777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:45.995233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:45.995283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:25.145 [2024-12-08 21:09:45.995298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.406 ms 00:23:25.145 [2024-12-08 21:09:45.995308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:45.995734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:45.995761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:25.145 [2024-12-08 21:09:45.995776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:25.145 [2024-12-08 21:09:45.995787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.067784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:46.067864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:25.145 [2024-12-08 21:09:46.067884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.968 ms 00:23:25.145 [2024-12-08 21:09:46.067896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.079108] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 9 (of 10) MiB 00:23:25.145 [2024-12-08 21:09:46.081713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:46.081759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:25.145 [2024-12-08 21:09:46.081774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.732 ms 00:23:25.145 [2024-12-08 21:09:46.081786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.081883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:46.081902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:25.145 [2024-12-08 21:09:46.081921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:25.145 [2024-12-08 21:09:46.081932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.082014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:46.082032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:25.145 [2024-12-08 21:09:46.082043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:25.145 [2024-12-08 21:09:46.082053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.083940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:46.084004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:25.145 [2024-12-08 21:09:46.084018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:23:25.145 [2024-12-08 21:09:46.084036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.084071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:46.084123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:25.145 [2024-12-08 21:09:46.084143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:25.145 [2024-12-08 21:09:46.084155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.084199] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:25.145 [2024-12-08 21:09:46.084231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:46.084259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:25.145 [2024-12-08 21:09:46.084271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:25.145 [2024-12-08 21:09:46.084283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.113659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:46.113717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:25.145 [2024-12-08 21:09:46.113733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.337 ms 00:23:25.145 [2024-12-08 21:09:46.113744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.113816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.145 [2024-12-08 21:09:46.113833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:23:25.145 [2024-12-08 21:09:46.113845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:25.145 [2024-12-08 21:09:46.113855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.145 [2024-12-08 21:09:46.115216] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 316.325 ms, result 0 00:23:26.525  [2024-12-08T21:09:48.144Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-08T21:09:49.518Z] Copying: 46/1024 [MB] (23 MBps) [2024-12-08T21:09:50.453Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-08T21:09:51.385Z] Copying: 93/1024 [MB] (23 MBps) [2024-12-08T21:09:52.320Z] Copying: 117/1024 [MB] (23 MBps) [2024-12-08T21:09:53.257Z] Copying: 140/1024 [MB] (23 MBps) [2024-12-08T21:09:54.199Z] Copying: 163/1024 [MB] (23 MBps) [2024-12-08T21:09:55.136Z] Copying: 187/1024 [MB] (23 MBps) [2024-12-08T21:09:56.513Z] Copying: 211/1024 [MB] (24 MBps) [2024-12-08T21:09:57.450Z] Copying: 235/1024 [MB] (23 MBps) [2024-12-08T21:09:58.389Z] Copying: 258/1024 [MB] (23 MBps) [2024-12-08T21:09:59.342Z] Copying: 282/1024 [MB] (23 MBps) [2024-12-08T21:10:00.318Z] Copying: 305/1024 [MB] (23 MBps) [2024-12-08T21:10:01.253Z] Copying: 329/1024 [MB] (24 MBps) [2024-12-08T21:10:02.187Z] Copying: 353/1024 [MB] (23 MBps) [2024-12-08T21:10:03.565Z] Copying: 377/1024 [MB] (23 MBps) [2024-12-08T21:10:04.134Z] Copying: 401/1024 [MB] (24 MBps) [2024-12-08T21:10:05.512Z] Copying: 425/1024 [MB] (24 MBps) [2024-12-08T21:10:06.447Z] Copying: 449/1024 [MB] (23 MBps) [2024-12-08T21:10:07.381Z] Copying: 474/1024 [MB] (24 MBps) [2024-12-08T21:10:08.315Z] Copying: 498/1024 [MB] (24 MBps) [2024-12-08T21:10:09.251Z] Copying: 520/1024 [MB] (21 MBps) [2024-12-08T21:10:10.188Z] Copying: 545/1024 [MB] (24 MBps) [2024-12-08T21:10:11.583Z] Copying: 568/1024 [MB] (23 MBps) [2024-12-08T21:10:12.151Z] Copying: 591/1024 [MB] (22 MBps) [2024-12-08T21:10:13.531Z] Copying: 615/1024 [MB] (24 MBps) [2024-12-08T21:10:14.468Z] Copying: 639/1024 [MB] (24 MBps) [2024-12-08T21:10:15.405Z] Copying: 664/1024 [MB] (24 MBps) [2024-12-08T21:10:16.340Z] Copying: 688/1024 [MB] (24 MBps) [2024-12-08T21:10:17.275Z] Copying: 712/1024 [MB] (24 MBps) [2024-12-08T21:10:18.213Z] Copying: 737/1024 [MB] (24 MBps) [2024-12-08T21:10:19.151Z] Copying: 761/1024 [MB] (24 MBps) [2024-12-08T21:10:20.530Z] Copying: 785/1024 [MB] (24 MBps) [2024-12-08T21:10:21.468Z] Copying: 810/1024 [MB] (24 MBps) [2024-12-08T21:10:22.406Z] Copying: 833/1024 [MB] (23 MBps) [2024-12-08T21:10:23.345Z] Copying: 857/1024 [MB] (24 MBps) [2024-12-08T21:10:24.282Z] Copying: 882/1024 [MB] (24 MBps) [2024-12-08T21:10:25.224Z] Copying: 905/1024 [MB] (23 MBps) [2024-12-08T21:10:26.161Z] Copying: 929/1024 [MB] (24 MBps) [2024-12-08T21:10:27.538Z] Copying: 954/1024 [MB] (24 MBps) [2024-12-08T21:10:28.476Z] Copying: 978/1024 [MB] (24 MBps) [2024-12-08T21:10:29.413Z] Copying: 1002/1024 [MB] (24 MBps) [2024-12-08T21:10:30.351Z] Copying: 1023/1024 [MB] (20 MBps) [2024-12-08T21:10:30.351Z] Copying: 1048560/1048576 [kB] (832 kBps) [2024-12-08T21:10:30.351Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-08 21:10:30.164496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.308 [2024-12-08 21:10:30.164625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:09.308 [2024-12-08 21:10:30.164660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:09.308 [2024-12-08 21:10:30.164679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:09.308 [2024-12-08 21:10:30.165645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:09.308 [2024-12-08 21:10:30.170629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.308 [2024-12-08 21:10:30.170660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:09.308 [2024-12-08 21:10:30.170685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:24:09.308 [2024-12-08 21:10:30.170695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.308 [2024-12-08 21:10:30.182651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.308 [2024-12-08 21:10:30.182686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:09.308 [2024-12-08 21:10:30.182701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.914 ms 00:24:09.308 [2024-12-08 21:10:30.182711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.308 [2024-12-08 21:10:30.202722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.308 [2024-12-08 21:10:30.202773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:09.308 [2024-12-08 21:10:30.202789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.992 ms 00:24:09.308 [2024-12-08 21:10:30.202800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.308 [2024-12-08 21:10:30.208189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.308 [2024-12-08 21:10:30.208216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:09.308 [2024-12-08 21:10:30.208229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.346 ms 00:24:09.308 [2024-12-08 21:10:30.208240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.308 [2024-12-08 21:10:30.232828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.308 [2024-12-08 21:10:30.232891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:09.308 [2024-12-08 21:10:30.232906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.526 ms 00:24:09.308 [2024-12-08 21:10:30.232917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.308 [2024-12-08 21:10:30.249766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.308 [2024-12-08 21:10:30.249821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:09.308 [2024-12-08 21:10:30.249836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.810 ms 00:24:09.308 [2024-12-08 21:10:30.249846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.568 [2024-12-08 21:10:30.364093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.569 [2024-12-08 21:10:30.364170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:09.569 [2024-12-08 21:10:30.364188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.206 ms 00:24:09.569 [2024-12-08 21:10:30.364201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.569 [2024-12-08 21:10:30.389862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.569 [2024-12-08 21:10:30.389910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info 
metadata 00:24:09.569 [2024-12-08 21:10:30.389924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.633 ms 00:24:09.569 [2024-12-08 21:10:30.389934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.569 [2024-12-08 21:10:30.414996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.569 [2024-12-08 21:10:30.415044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:09.569 [2024-12-08 21:10:30.415058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.024 ms 00:24:09.569 [2024-12-08 21:10:30.415068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.569 [2024-12-08 21:10:30.440435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.569 [2024-12-08 21:10:30.440470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:09.569 [2024-12-08 21:10:30.440484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.320 ms 00:24:09.569 [2024-12-08 21:10:30.440495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.569 [2024-12-08 21:10:30.465515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.569 [2024-12-08 21:10:30.465561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:09.569 [2024-12-08 21:10:30.465575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.896 ms 00:24:09.569 [2024-12-08 21:10:30.465585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.569 [2024-12-08 21:10:30.465622] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:09.569 [2024-12-08 21:10:30.465641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open 00:24:09.569 [2024-12-08 21:10:30.465654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: 
free 00:24:09.569 [2024-12-08 21:10:30.465774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.465993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 
261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:09.569 [2024-12-08 21:10:30.466207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466635] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:09.570 [2024-12-08 21:10:30.466777] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:09.570 [2024-12-08 21:10:30.466794] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 63da4528-73f7-4366-8cb9-ed8d58b39646 00:24:09.570 [2024-12-08 21:10:30.466805] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048 00:24:09.570 [2024-12-08 21:10:30.466815] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131008 00:24:09.570 [2024-12-08 21:10:30.466824] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048 00:24:09.570 [2024-12-08 21:10:30.466845] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:24:09.570 [2024-12-08 21:10:30.466856] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:09.570 [2024-12-08 21:10:30.466866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:09.570 [2024-12-08 21:10:30.466876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:09.570 [2024-12-08 21:10:30.466885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:09.570 [2024-12-08 21:10:30.466895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:09.570 [2024-12-08 21:10:30.466905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.570 [2024-12-08 21:10:30.466916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:09.570 [2024-12-08 21:10:30.466926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:24:09.570 [2024-12-08 21:10:30.466936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.570 [2024-12-08 
21:10:30.480837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.570 [2024-12-08 21:10:30.480883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:09.570 [2024-12-08 21:10:30.480897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.846 ms 00:24:09.570 [2024-12-08 21:10:30.480909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.570 [2024-12-08 21:10:30.481162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.570 [2024-12-08 21:10:30.481178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:09.570 [2024-12-08 21:10:30.481198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:24:09.570 [2024-12-08 21:10:30.481208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.570 [2024-12-08 21:10:30.517601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.571 [2024-12-08 21:10:30.517651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:09.571 [2024-12-08 21:10:30.517665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.571 [2024-12-08 21:10:30.517675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.571 [2024-12-08 21:10:30.517725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.571 [2024-12-08 21:10:30.517738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:09.571 [2024-12-08 21:10:30.517755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.571 [2024-12-08 21:10:30.517765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.571 [2024-12-08 21:10:30.517871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.571 [2024-12-08 21:10:30.517906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:09.571 [2024-12-08 21:10:30.517918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.571 [2024-12-08 21:10:30.517929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.571 [2024-12-08 21:10:30.517950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.571 [2024-12-08 21:10:30.517962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:09.571 [2024-12-08 21:10:30.517973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.571 [2024-12-08 21:10:30.517989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.571 [2024-12-08 21:10:30.592917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.571 [2024-12-08 21:10:30.592977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:09.571 [2024-12-08 21:10:30.592993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.571 [2024-12-08 21:10:30.593003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.837 [2024-12-08 21:10:30.624108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.837 [2024-12-08 21:10:30.624167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:09.837 [2024-12-08 21:10:30.624188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.837 [2024-12-08 21:10:30.624204] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.837 [2024-12-08 21:10:30.624279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.837 [2024-12-08 21:10:30.624296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:09.837 [2024-12-08 21:10:30.624314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.837 [2024-12-08 21:10:30.624324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.837 [2024-12-08 21:10:30.624373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.837 [2024-12-08 21:10:30.624402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:09.837 [2024-12-08 21:10:30.624444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.837 [2024-12-08 21:10:30.624470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.837 [2024-12-08 21:10:30.624584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.837 [2024-12-08 21:10:30.624602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:09.837 [2024-12-08 21:10:30.624621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.837 [2024-12-08 21:10:30.624632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.837 [2024-12-08 21:10:30.624676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.837 [2024-12-08 21:10:30.624698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:09.837 [2024-12-08 21:10:30.624710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.837 [2024-12-08 21:10:30.624720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.837 [2024-12-08 21:10:30.624781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.837 [2024-12-08 21:10:30.624795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:09.837 [2024-12-08 21:10:30.624806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.837 [2024-12-08 21:10:30.624816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.837 [2024-12-08 21:10:30.624863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.837 [2024-12-08 21:10:30.624877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:09.837 [2024-12-08 21:10:30.624889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.837 [2024-12-08 21:10:30.624899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.837 [2024-12-08 21:10:30.625038] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.417 ms, result 0 00:24:11.305 00:24:11.305 00:24:11.305 21:10:32 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:13.210 21:10:33 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:13.210 [2024-12-08 21:10:33.857872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
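(As a cross-check on the dumps above, not part of the test output: the blk_offs/blk_sz fields in the superblock metadata layout are hex block counts, and with the 4 KiB FTL block size those figures imply, they reproduce the MiB values printed by dump_region; likewise the WAF reported by ftl_dev_dump_stats is simply total writes over user writes. A minimal sketch in Python, using only numbers already present in the log; the 4 KiB block size and the type-to-region pairing of 0x2/l2p and 0x8/data_nvc are inferences from the dump itself, and the helper names are hypothetical.)

FTL_BLOCK_SIZE = 4096  # bytes per block; assumption consistent with the dump

def blocks_to_mib(blocks: int) -> float:
    # Convert a block count from the superblock dump to MiB.
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

# Region type:0x2 (l2p row): blk_offs:0x20 blk_sz:0x5000
assert blocks_to_mib(0x20) == 0.125        # "offset: 0.12 MiB"
assert blocks_to_mib(0x5000) == 80.0       # "blocks: 80.00 MiB"

# Region type:0x8 (data_nvc row): blk_offs:0x61e0 blk_sz:0x100000
assert blocks_to_mib(0x61e0) == 97.875     # "offset: 97.88 MiB"
assert blocks_to_mib(0x100000) == 4096.0   # "blocks: 4096.00 MiB"

# Write amplification as reported by ftl_dev_dump_stats:
total_writes, user_writes = 131008, 130048
print(f"WAF = {total_writes / user_writes:.4f}")   # 1.0074, as reported

# Average copy throughput: 1024 MB between ~21:09:46 and ~21:10:30 (~44 s)
print(f"~{1024 / 44:.0f} MBps")                    # ~23 MBps, matching the progress line

(The same arithmetic applies to the second startup's layout dump below, whose region table repeats these offsets and sizes.)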
00:24:13.210 [2024-12-08 21:10:33.858005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77193 ] 00:24:13.210 [2024-12-08 21:10:34.020317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.210 [2024-12-08 21:10:34.218829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.469 [2024-12-08 21:10:34.464711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:13.469 [2024-12-08 21:10:34.464798] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:13.730 [2024-12-08 21:10:34.613442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.730 [2024-12-08 21:10:34.613484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:13.730 [2024-12-08 21:10:34.613518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:13.730 [2024-12-08 21:10:34.613533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.730 [2024-12-08 21:10:34.613600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.730 [2024-12-08 21:10:34.613617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:13.730 [2024-12-08 21:10:34.613628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:13.730 [2024-12-08 21:10:34.613638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.613666] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:13.731 [2024-12-08 21:10:34.614568] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:13.731 [2024-12-08 21:10:34.614619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 21:10:34.614632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:13.731 [2024-12-08 21:10:34.614643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:24:13.731 [2024-12-08 21:10:34.614653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.615742] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:13.731 [2024-12-08 21:10:34.628506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 21:10:34.628542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:13.731 [2024-12-08 21:10:34.628572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.766 ms 00:24:13.731 [2024-12-08 21:10:34.628582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.628642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 21:10:34.628659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:13.731 [2024-12-08 21:10:34.628670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:13.731 [2024-12-08 21:10:34.628680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.632731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 
21:10:34.632765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:13.731 [2024-12-08 21:10:34.632794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.985 ms 00:24:13.731 [2024-12-08 21:10:34.632804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.632896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 21:10:34.632913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:13.731 [2024-12-08 21:10:34.632924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:13.731 [2024-12-08 21:10:34.632933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.633003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 21:10:34.633049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:13.731 [2024-12-08 21:10:34.633077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:13.731 [2024-12-08 21:10:34.633086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.633127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:13.731 [2024-12-08 21:10:34.636577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 21:10:34.636607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:13.731 [2024-12-08 21:10:34.636636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.467 ms 00:24:13.731 [2024-12-08 21:10:34.636646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.636698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 21:10:34.636713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:13.731 [2024-12-08 21:10:34.636724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:13.731 [2024-12-08 21:10:34.636737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.636760] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:13.731 [2024-12-08 21:10:34.636783] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:13.731 [2024-12-08 21:10:34.636817] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:13.731 [2024-12-08 21:10:34.636834] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:13.731 [2024-12-08 21:10:34.636948] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:13.731 [2024-12-08 21:10:34.636963] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:13.731 [2024-12-08 21:10:34.636980] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:13.731 [2024-12-08 21:10:34.636993] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:13.731 [2024-12-08 21:10:34.637005] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:13.731 [2024-12-08 21:10:34.637016] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:13.731 [2024-12-08 21:10:34.637025] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:13.731 [2024-12-08 21:10:34.637035] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:13.731 [2024-12-08 21:10:34.637044] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:13.731 [2024-12-08 21:10:34.637055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 21:10:34.637065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:13.731 [2024-12-08 21:10:34.637075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:24:13.731 [2024-12-08 21:10:34.637085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.637165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.731 [2024-12-08 21:10:34.637182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:13.731 [2024-12-08 21:10:34.637193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:13.731 [2024-12-08 21:10:34.637203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.731 [2024-12-08 21:10:34.637298] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:13.731 [2024-12-08 21:10:34.637316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:13.731 [2024-12-08 21:10:34.637327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:13.731 [2024-12-08 21:10:34.637338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:13.731 [2024-12-08 21:10:34.637358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:13.731 [2024-12-08 21:10:34.637377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:13.731 [2024-12-08 21:10:34.637386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:13.731 [2024-12-08 21:10:34.637404] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:13.731 [2024-12-08 21:10:34.637414] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:13.731 [2024-12-08 21:10:34.637423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:13.731 [2024-12-08 21:10:34.637434] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:13.731 [2024-12-08 21:10:34.637444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:13.731 [2024-12-08 21:10:34.637453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637475] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:13.731 [2024-12-08 21:10:34.637485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:13.731 [2024-12-08 21:10:34.637494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:13.731 [2024-12-08 21:10:34.637512] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:13.731 [2024-12-08 21:10:34.637522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:13.731 [2024-12-08 21:10:34.637531] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:13.731 [2024-12-08 21:10:34.637540] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:13.731 [2024-12-08 21:10:34.637558] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:13.731 [2024-12-08 21:10:34.637567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:13.731 [2024-12-08 21:10:34.637585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:13.731 [2024-12-08 21:10:34.637594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:13.731 [2024-12-08 21:10:34.637613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:13.731 [2024-12-08 21:10:34.637622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:13.731 [2024-12-08 21:10:34.637640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:13.731 [2024-12-08 21:10:34.637649] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:13.731 [2024-12-08 21:10:34.637658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:13.731 [2024-12-08 21:10:34.637667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:13.731 [2024-12-08 21:10:34.637676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:13.731 [2024-12-08 21:10:34.637686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:13.731 [2024-12-08 21:10:34.637700] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:13.731 [2024-12-08 21:10:34.637715] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:13.731 [2024-12-08 21:10:34.637725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:13.732 [2024-12-08 21:10:34.637735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.732 [2024-12-08 21:10:34.637745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:13.732 [2024-12-08 21:10:34.637756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:13.732 [2024-12-08 21:10:34.637765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:13.732 [2024-12-08 21:10:34.637775] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:13.732 [2024-12-08 21:10:34.637784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:13.732 [2024-12-08 21:10:34.637793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:13.732 [2024-12-08 21:10:34.637804] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:13.732 [2024-12-08 21:10:34.637817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:13.732 [2024-12-08 21:10:34.637827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:13.732 [2024-12-08 21:10:34.637838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:13.732 [2024-12-08 21:10:34.637848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:13.732 [2024-12-08 21:10:34.637858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:13.732 [2024-12-08 21:10:34.637868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:13.732 [2024-12-08 21:10:34.637878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:13.732 [2024-12-08 21:10:34.637888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:13.732 [2024-12-08 21:10:34.637898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:13.732 [2024-12-08 21:10:34.637908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:13.732 [2024-12-08 21:10:34.637917] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:13.732 [2024-12-08 21:10:34.637927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:13.732 [2024-12-08 21:10:34.637938] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:13.732 [2024-12-08 21:10:34.637948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:13.732 [2024-12-08 21:10:34.637958] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:13.732 [2024-12-08 21:10:34.637969] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:13.732 [2024-12-08 21:10:34.637980] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:13.732 [2024-12-08 21:10:34.637990] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:13.732 [2024-12-08 21:10:34.638000] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:13.732 [2024-12-08 21:10:34.638010] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:13.732 [2024-12-08 21:10:34.638021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.638031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:13.732 [2024-12-08 21:10:34.638041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:24:13.732 [2024-12-08 21:10:34.638051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.653319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.653357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:13.732 [2024-12-08 21:10:34.653389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.202 ms 00:24:13.732 [2024-12-08 21:10:34.653404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.653487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.653501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:13.732 [2024-12-08 21:10:34.653512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:13.732 [2024-12-08 21:10:34.653521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.700981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.701024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:13.732 [2024-12-08 21:10:34.701055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.400 ms 00:24:13.732 [2024-12-08 21:10:34.701065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.701129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.701146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:13.732 [2024-12-08 21:10:34.701157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:13.732 [2024-12-08 21:10:34.701168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.701549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.701576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:13.732 [2024-12-08 21:10:34.701590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:24:13.732 [2024-12-08 21:10:34.701606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.701747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.701764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:13.732 [2024-12-08 21:10:34.701776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:13.732 [2024-12-08 21:10:34.701786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.715808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.715858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:13.732 [2024-12-08 21:10:34.715889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.997 ms 00:24:13.732 [2024-12-08 
21:10:34.715899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.729191] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:13.732 [2024-12-08 21:10:34.729243] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:13.732 [2024-12-08 21:10:34.729273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.729284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:13.732 [2024-12-08 21:10:34.729296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.265 ms 00:24:13.732 [2024-12-08 21:10:34.729305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.752287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.752353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:13.732 [2024-12-08 21:10:34.752369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.934 ms 00:24:13.732 [2024-12-08 21:10:34.752379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.732 [2024-12-08 21:10:34.764733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.732 [2024-12-08 21:10:34.764784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:13.732 [2024-12-08 21:10:34.764813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.306 ms 00:24:13.732 [2024-12-08 21:10:34.764823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.993 [2024-12-08 21:10:34.778298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.993 [2024-12-08 21:10:34.778349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:13.993 [2024-12-08 21:10:34.778391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.436 ms 00:24:13.993 [2024-12-08 21:10:34.778401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.993 [2024-12-08 21:10:34.778902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.993 [2024-12-08 21:10:34.778937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:13.993 [2024-12-08 21:10:34.778966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:24:13.993 [2024-12-08 21:10:34.778977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.993 [2024-12-08 21:10:34.848274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.993 [2024-12-08 21:10:34.848332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:13.993 [2024-12-08 21:10:34.848365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.273 ms 00:24:13.993 [2024-12-08 21:10:34.848375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.993 [2024-12-08 21:10:34.858338] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:13.993 [2024-12-08 21:10:34.860360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.993 [2024-12-08 21:10:34.860391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:13.993 [2024-12-08 21:10:34.860406] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.919 ms 00:24:13.993 [2024-12-08 21:10:34.860422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.993 [2024-12-08 21:10:34.860523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.993 [2024-12-08 21:10:34.860555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:13.993 [2024-12-08 21:10:34.860566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:13.993 [2024-12-08 21:10:34.860577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.993 [2024-12-08 21:10:34.861651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.993 [2024-12-08 21:10:34.861700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:13.993 [2024-12-08 21:10:34.861729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:24:13.993 [2024-12-08 21:10:34.861739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.993 [2024-12-08 21:10:34.863380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.993 [2024-12-08 21:10:34.863427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:13.993 [2024-12-08 21:10:34.863470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:24:13.993 [2024-12-08 21:10:34.863480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.994 [2024-12-08 21:10:34.863525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.994 [2024-12-08 21:10:34.863538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:13.994 [2024-12-08 21:10:34.863555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:13.994 [2024-12-08 21:10:34.863565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.994 [2024-12-08 21:10:34.863602] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:13.994 [2024-12-08 21:10:34.863617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.994 [2024-12-08 21:10:34.863630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:13.994 [2024-12-08 21:10:34.863639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:13.994 [2024-12-08 21:10:34.863648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.994 [2024-12-08 21:10:34.890252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.994 [2024-12-08 21:10:34.890309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:13.994 [2024-12-08 21:10:34.890342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.582 ms 00:24:13.994 [2024-12-08 21:10:34.890354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.994 [2024-12-08 21:10:34.890470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.994 [2024-12-08 21:10:34.890503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:13.994 [2024-12-08 21:10:34.890515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:13.994 [2024-12-08 21:10:34.890527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.994 [2024-12-08 21:10:34.898669] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 283.467 ms, result 0 00:24:15.369  [2024-12-08T21:10:37.350Z] Copying: 916/1048576 [kB] (916 kBps) [2024-12-08T21:10:38.287Z] Copying: 5112/1048576 [kB] (4196 kBps) [2024-12-08T21:10:39.224Z] Copying: 29/1024 [MB] (24 MBps) [2024-12-08T21:10:40.162Z] Copying: 57/1024 [MB] (28 MBps) [2024-12-08T21:10:41.100Z] Copying: 84/1024 [MB] (27 MBps) [2024-12-08T21:10:42.475Z] Copying: 112/1024 [MB] (27 MBps) [2024-12-08T21:10:43.409Z] Copying: 139/1024 [MB] (27 MBps) [2024-12-08T21:10:44.346Z] Copying: 167/1024 [MB] (27 MBps) [2024-12-08T21:10:45.295Z] Copying: 195/1024 [MB] (27 MBps) [2024-12-08T21:10:46.228Z] Copying: 223/1024 [MB] (27 MBps) [2024-12-08T21:10:47.161Z] Copying: 251/1024 [MB] (28 MBps) [2024-12-08T21:10:48.098Z] Copying: 278/1024 [MB] (27 MBps) [2024-12-08T21:10:49.480Z] Copying: 305/1024 [MB] (27 MBps) [2024-12-08T21:10:50.419Z] Copying: 332/1024 [MB] (26 MBps) [2024-12-08T21:10:51.358Z] Copying: 359/1024 [MB] (27 MBps) [2024-12-08T21:10:52.296Z] Copying: 388/1024 [MB] (28 MBps) [2024-12-08T21:10:53.236Z] Copying: 416/1024 [MB] (27 MBps) [2024-12-08T21:10:54.175Z] Copying: 444/1024 [MB] (28 MBps) [2024-12-08T21:10:55.111Z] Copying: 472/1024 [MB] (28 MBps) [2024-12-08T21:10:56.487Z] Copying: 500/1024 [MB] (28 MBps) [2024-12-08T21:10:57.423Z] Copying: 528/1024 [MB] (28 MBps) [2024-12-08T21:10:58.358Z] Copying: 557/1024 [MB] (28 MBps) [2024-12-08T21:10:59.293Z] Copying: 585/1024 [MB] (28 MBps) [2024-12-08T21:11:00.228Z] Copying: 613/1024 [MB] (28 MBps) [2024-12-08T21:11:01.181Z] Copying: 641/1024 [MB] (27 MBps) [2024-12-08T21:11:02.118Z] Copying: 669/1024 [MB] (27 MBps) [2024-12-08T21:11:03.107Z] Copying: 697/1024 [MB] (28 MBps) [2024-12-08T21:11:04.075Z] Copying: 725/1024 [MB] (28 MBps) [2024-12-08T21:11:05.452Z] Copying: 753/1024 [MB] (27 MBps) [2024-12-08T21:11:06.388Z] Copying: 781/1024 [MB] (28 MBps) [2024-12-08T21:11:07.326Z] Copying: 809/1024 [MB] (28 MBps) [2024-12-08T21:11:08.263Z] Copying: 836/1024 [MB] (27 MBps) [2024-12-08T21:11:09.201Z] Copying: 864/1024 [MB] (27 MBps) [2024-12-08T21:11:10.139Z] Copying: 890/1024 [MB] (26 MBps) [2024-12-08T21:11:11.077Z] Copying: 918/1024 [MB] (27 MBps) [2024-12-08T21:11:12.457Z] Copying: 945/1024 [MB] (27 MBps) [2024-12-08T21:11:13.395Z] Copying: 972/1024 [MB] (27 MBps) [2024-12-08T21:11:13.963Z] Copying: 1000/1024 [MB] (27 MBps) [2024-12-08T21:11:13.963Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-08 21:11:13.925049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.920 [2024-12-08 21:11:13.925123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:52.920 [2024-12-08 21:11:13.925146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:52.920 [2024-12-08 21:11:13.925160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.920 [2024-12-08 21:11:13.925192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:52.920 [2024-12-08 21:11:13.928581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.920 [2024-12-08 21:11:13.928617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:52.920 [2024-12-08 21:11:13.928633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:24:52.920 [2024-12-08 21:11:13.928645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.920 [2024-12-08 21:11:13.928921] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.920 [2024-12-08 21:11:13.928952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:52.920 [2024-12-08 21:11:13.928967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:24:52.920 [2024-12-08 21:11:13.928979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.920 [2024-12-08 21:11:13.939538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.920 [2024-12-08 21:11:13.939594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:52.920 [2024-12-08 21:11:13.939613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.535 ms 00:24:52.920 [2024-12-08 21:11:13.939626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.920 [2024-12-08 21:11:13.947484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.920 [2024-12-08 21:11:13.947541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:52.920 [2024-12-08 21:11:13.947573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.832 ms 00:24:52.920 [2024-12-08 21:11:13.947584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.180 [2024-12-08 21:11:13.974469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.180 [2024-12-08 21:11:13.974524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:53.180 [2024-12-08 21:11:13.974557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.814 ms 00:24:53.180 [2024-12-08 21:11:13.974568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.180 [2024-12-08 21:11:13.989812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.180 [2024-12-08 21:11:13.989880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:53.180 [2024-12-08 21:11:13.989913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.219 ms 00:24:53.180 [2024-12-08 21:11:13.989924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.180 [2024-12-08 21:11:13.994248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.180 [2024-12-08 21:11:13.994305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:53.180 [2024-12-08 21:11:13.994338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.298 ms 00:24:53.180 [2024-12-08 21:11:13.994357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.180 [2024-12-08 21:11:14.020259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.180 [2024-12-08 21:11:14.020314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:53.180 [2024-12-08 21:11:14.020346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.858 ms 00:24:53.180 [2024-12-08 21:11:14.020357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.180 [2024-12-08 21:11:14.045283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.180 [2024-12-08 21:11:14.045334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:53.180 [2024-12-08 21:11:14.045365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.902 ms 00:24:53.180 [2024-12-08 21:11:14.045389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:24:53.180 [2024-12-08 21:11:14.070004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.180 [2024-12-08 21:11:14.070038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:53.180 [2024-12-08 21:11:14.070068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.591 ms 00:24:53.180 [2024-12-08 21:11:14.070078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.180 [2024-12-08 21:11:14.094271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.180 [2024-12-08 21:11:14.094307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:53.180 [2024-12-08 21:11:14.094338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.123 ms 00:24:53.180 [2024-12-08 21:11:14.094348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.180 [2024-12-08 21:11:14.094371] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:53.180 [2024-12-08 21:11:14.094389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:53.180 [2024-12-08 21:11:14.094402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 4096 / 261120 wr_cnt: 1 state: open 00:24:53.180 [2024-12-08 21:11:14.094412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:53.180 [2024-12-08 21:11:14.094422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:53.180 [2024-12-08 21:11:14.094432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:53.180 [2024-12-08 21:11:14.094441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:53.180 [2024-12-08 21:11:14.094450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:53.180 [2024-12-08 21:11:14.094460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:53.180 [2024-12-08 21:11:14.094470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094867] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.094999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095137] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 
21:11:14.095412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:53.181 [2024-12-08 21:11:14.095465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:53.182 [2024-12-08 21:11:14.095490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:53.182 [2024-12-08 21:11:14.095500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:53.182 [2024-12-08 21:11:14.095518] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:53.182 [2024-12-08 21:11:14.095528] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 63da4528-73f7-4366-8cb9-ed8d58b39646 00:24:53.182 [2024-12-08 21:11:14.095538] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 265216 00:24:53.182 [2024-12-08 21:11:14.095555] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137152 00:24:53.182 [2024-12-08 21:11:14.095564] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135168 00:24:53.182 [2024-12-08 21:11:14.095575] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:24:53.182 [2024-12-08 21:11:14.095585] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:53.182 [2024-12-08 21:11:14.095596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:53.182 [2024-12-08 21:11:14.095606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:53.182 [2024-12-08 21:11:14.095614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:53.182 [2024-12-08 21:11:14.095634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:53.182 [2024-12-08 21:11:14.095644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.182 [2024-12-08 21:11:14.095654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:53.182 [2024-12-08 21:11:14.095665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.275 ms 00:24:53.182 [2024-12-08 21:11:14.095675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.182 [2024-12-08 21:11:14.109217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.182 [2024-12-08 21:11:14.109264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:53.182 [2024-12-08 21:11:14.109294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.505 ms 00:24:53.182 [2024-12-08 21:11:14.109304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.182 [2024-12-08 21:11:14.109549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.182 [2024-12-08 21:11:14.109597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:53.182 
[2024-12-08 21:11:14.109610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:24:53.182 [2024-12-08 21:11:14.109628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.182 [2024-12-08 21:11:14.145024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.182 [2024-12-08 21:11:14.145062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:53.182 [2024-12-08 21:11:14.145105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.182 [2024-12-08 21:11:14.145117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.182 [2024-12-08 21:11:14.145167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.182 [2024-12-08 21:11:14.145181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:53.182 [2024-12-08 21:11:14.145192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.182 [2024-12-08 21:11:14.145202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.182 [2024-12-08 21:11:14.145289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.182 [2024-12-08 21:11:14.145338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:53.182 [2024-12-08 21:11:14.145366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.182 [2024-12-08 21:11:14.145377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.182 [2024-12-08 21:11:14.145399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.182 [2024-12-08 21:11:14.145413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:53.182 [2024-12-08 21:11:14.145423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.182 [2024-12-08 21:11:14.145433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.441 [2024-12-08 21:11:14.222261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.441 [2024-12-08 21:11:14.222332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:53.441 [2024-12-08 21:11:14.222364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.441 [2024-12-08 21:11:14.222375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.441 [2024-12-08 21:11:14.253225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.441 [2024-12-08 21:11:14.253261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:53.441 [2024-12-08 21:11:14.253291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.441 [2024-12-08 21:11:14.253301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.441 [2024-12-08 21:11:14.253381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.441 [2024-12-08 21:11:14.253399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:53.441 [2024-12-08 21:11:14.253410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.442 [2024-12-08 21:11:14.253419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.442 [2024-12-08 21:11:14.253468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.442 [2024-12-08 21:11:14.253483] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:53.442 [2024-12-08 21:11:14.253509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.442 [2024-12-08 21:11:14.253534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.442 [2024-12-08 21:11:14.253659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.442 [2024-12-08 21:11:14.253682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:53.442 [2024-12-08 21:11:14.253694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.442 [2024-12-08 21:11:14.253704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.442 [2024-12-08 21:11:14.253758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.442 [2024-12-08 21:11:14.253775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:53.442 [2024-12-08 21:11:14.253787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.442 [2024-12-08 21:11:14.253797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.442 [2024-12-08 21:11:14.253837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.442 [2024-12-08 21:11:14.253858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:53.442 [2024-12-08 21:11:14.253870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.442 [2024-12-08 21:11:14.253879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.442 [2024-12-08 21:11:14.253927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.442 [2024-12-08 21:11:14.253943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:53.442 [2024-12-08 21:11:14.253953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.442 [2024-12-08 21:11:14.253964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.442 [2024-12-08 21:11:14.254093] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.017 ms, result 0 00:24:54.378 00:24:54.378 00:24:54.378 21:11:15 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:56.281 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:56.281 21:11:16 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:56.281 [2024-12-08 21:11:16.975933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
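The md5sum pass above confirms the first chunk of data survived the dirty shutdown, and the spdk_dd invocation that follows appears to read the next range of ftl0 (--skip=262144) back into testfile2 for the same kind of comparison. The figures reported in the preceding FTL dumps are internally consistent, and the short Python sketch below re-derives a few of them as a sanity check. It is illustrative only, not part of the SPDK test suite; the 4 KiB block size is an assumption based on SPDK FTL defaults, and every other constant is copied from the log records above.

# Illustrative sanity check (hypothetical helper, not part of the SPDK test suite).
# Constants are copied from the log records above; the 4 KiB block size is an
# assumption based on SPDK FTL defaults.
FTL_BLOCK = 4096  # bytes per FTL block (assumed)

def mib(nbytes):
    """Convert a byte count to MiB."""
    return nbytes / (1024 * 1024)

# ftl_debug.c stats: "total writes: 137152", "user writes: 135168", "WAF: 1.0147"
total_writes, user_writes = 137152, 135168
assert round(total_writes / user_writes, 4) == 1.0147  # write amplification factor

# ftl_layout.c: "L2P entries: 20971520" with "L2P address size: 4" needs exactly
# the "Region l2p ... blocks: 80.00 MiB" reported in the layout dump.
assert mib(20971520 * 4) == 80.0

# spdk_dd --count=262144 --skip=262144: with 4 KiB I/O units this reads back
# 1024 MiB of ftl0, matching the "Copying: 1024/1024 [MB]" progress seen above.
assert mib(262144 * FTL_BLOCK) == 1024.0

# ftl_debug.c bands: "Band 1: 261120 / 261120 ... state: closed" -> one fully
# written band holds 1020 MiB of user data.
assert mib(261120 * FTL_BLOCK) == 1020.0

print("FTL log figures are self-consistent")

Run as-is the sketch prints its success line; any drift between the reported WAF, the L2P region sizing, the band fill counts, and the dd geometry would trip an assertion.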
00:24:56.281 [2024-12-08 21:11:16.976048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77630 ] 00:24:56.281 [2024-12-08 21:11:17.133531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.539 [2024-12-08 21:11:17.329956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.799 [2024-12-08 21:11:17.584772] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:56.799 [2024-12-08 21:11:17.584890] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:56.799 [2024-12-08 21:11:17.734734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.799 [2024-12-08 21:11:17.734778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:56.799 [2024-12-08 21:11:17.734811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:56.799 [2024-12-08 21:11:17.734826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.799 [2024-12-08 21:11:17.734886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.799 [2024-12-08 21:11:17.734903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:56.799 [2024-12-08 21:11:17.734914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:56.799 [2024-12-08 21:11:17.734924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.799 [2024-12-08 21:11:17.734951] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:56.799 [2024-12-08 21:11:17.735881] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:56.799 [2024-12-08 21:11:17.735932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.799 [2024-12-08 21:11:17.735945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:56.799 [2024-12-08 21:11:17.735956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:24:56.799 [2024-12-08 21:11:17.735966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.799 [2024-12-08 21:11:17.737200] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:56.799 [2024-12-08 21:11:17.750342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.799 [2024-12-08 21:11:17.750378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:56.799 [2024-12-08 21:11:17.750409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.143 ms 00:24:56.799 [2024-12-08 21:11:17.750419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.799 [2024-12-08 21:11:17.750478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.799 [2024-12-08 21:11:17.750495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:56.800 [2024-12-08 21:11:17.750505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:56.800 [2024-12-08 21:11:17.750515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.800 [2024-12-08 21:11:17.754729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.800 [2024-12-08 
21:11:17.754763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:56.800 [2024-12-08 21:11:17.754792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.139 ms 00:24:56.800 [2024-12-08 21:11:17.754802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.800 [2024-12-08 21:11:17.754892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.800 [2024-12-08 21:11:17.754909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:56.800 [2024-12-08 21:11:17.754920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:56.800 [2024-12-08 21:11:17.754929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.800 [2024-12-08 21:11:17.754990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.800 [2024-12-08 21:11:17.755021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:56.800 [2024-12-08 21:11:17.755048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:56.800 [2024-12-08 21:11:17.755075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.800 [2024-12-08 21:11:17.755130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:56.800 [2024-12-08 21:11:17.758854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.800 [2024-12-08 21:11:17.758902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:56.800 [2024-12-08 21:11:17.758930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.740 ms 00:24:56.800 [2024-12-08 21:11:17.758940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.800 [2024-12-08 21:11:17.758982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.800 [2024-12-08 21:11:17.758997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:56.800 [2024-12-08 21:11:17.759008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:56.800 [2024-12-08 21:11:17.759020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.800 [2024-12-08 21:11:17.759058] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:56.800 [2024-12-08 21:11:17.759084] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:56.800 [2024-12-08 21:11:17.759135] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:56.800 [2024-12-08 21:11:17.759185] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:56.800 [2024-12-08 21:11:17.759257] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:56.800 [2024-12-08 21:11:17.759271] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:56.800 [2024-12-08 21:11:17.759288] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:56.800 [2024-12-08 21:11:17.759302] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:56.800 [2024-12-08 21:11:17.759314] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:56.800 [2024-12-08 21:11:17.759324] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:56.800 [2024-12-08 21:11:17.759334] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:56.800 [2024-12-08 21:11:17.759343] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:56.800 [2024-12-08 21:11:17.759352] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:56.800 [2024-12-08 21:11:17.759363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.800 [2024-12-08 21:11:17.759373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:56.800 [2024-12-08 21:11:17.759384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:24:56.800 [2024-12-08 21:11:17.759394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.800 [2024-12-08 21:11:17.759460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.800 [2024-12-08 21:11:17.759474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:56.800 [2024-12-08 21:11:17.759484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:56.800 [2024-12-08 21:11:17.759494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.800 [2024-12-08 21:11:17.759573] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:56.800 [2024-12-08 21:11:17.759587] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:56.800 [2024-12-08 21:11:17.759599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:56.800 [2024-12-08 21:11:17.759609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:56.800 [2024-12-08 21:11:17.759629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:56.800 [2024-12-08 21:11:17.759648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:56.800 [2024-12-08 21:11:17.759657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:56.800 [2024-12-08 21:11:17.759675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:56.800 [2024-12-08 21:11:17.759685] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:56.800 [2024-12-08 21:11:17.759695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:56.800 [2024-12-08 21:11:17.759708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:56.800 [2024-12-08 21:11:17.759717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:56.800 [2024-12-08 21:11:17.759726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:56.800 [2024-12-08 21:11:17.759758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:56.800 [2024-12-08 21:11:17.759767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:56.800 [2024-12-08 21:11:17.759785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:56.800 [2024-12-08 21:11:17.759795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:56.800 [2024-12-08 21:11:17.759803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:56.800 [2024-12-08 21:11:17.759812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:56.800 [2024-12-08 21:11:17.759831] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:56.800 [2024-12-08 21:11:17.759840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:56.800 [2024-12-08 21:11:17.759858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:56.800 [2024-12-08 21:11:17.759867] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:56.800 [2024-12-08 21:11:17.759885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:56.800 [2024-12-08 21:11:17.759894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:56.800 [2024-12-08 21:11:17.759912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:56.800 [2024-12-08 21:11:17.759921] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:56.800 [2024-12-08 21:11:17.759930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:56.800 [2024-12-08 21:11:17.759939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:56.800 [2024-12-08 21:11:17.759948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:56.800 [2024-12-08 21:11:17.759957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:56.800 [2024-12-08 21:11:17.759966] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:56.800 [2024-12-08 21:11:17.759980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:56.800 [2024-12-08 21:11:17.759990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:56.800 [2024-12-08 21:11:17.760002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.800 [2024-12-08 21:11:17.760012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:56.800 [2024-12-08 21:11:17.760022] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:56.800 [2024-12-08 21:11:17.760031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:56.800 [2024-12-08 21:11:17.760040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:56.800 [2024-12-08 21:11:17.760049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:56.800 [2024-12-08 21:11:17.760059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:56.800 [2024-12-08 21:11:17.760070] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:56.800 [2024-12-08 21:11:17.760097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:56.800 [2024-12-08 21:11:17.760152] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:56.800 [2024-12-08 21:11:17.760165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:56.800 [2024-12-08 21:11:17.760176] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:56.800 [2024-12-08 21:11:17.760187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:56.800 [2024-12-08 21:11:17.760198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:56.800 [2024-12-08 21:11:17.760209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:56.800 [2024-12-08 21:11:17.760220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:56.801 [2024-12-08 21:11:17.760231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:56.801 [2024-12-08 21:11:17.760242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:56.801 [2024-12-08 21:11:17.760253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:56.801 [2024-12-08 21:11:17.760264] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:56.801 [2024-12-08 21:11:17.760275] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:56.801 [2024-12-08 21:11:17.760287] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:56.801 [2024-12-08 21:11:17.760298] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:56.801 [2024-12-08 21:11:17.760309] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:56.801 [2024-12-08 21:11:17.760321] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:56.801 [2024-12-08 21:11:17.760332] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:56.801 [2024-12-08 21:11:17.760344] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:56.801 [2024-12-08 21:11:17.760355] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:56.801 [2024-12-08 21:11:17.760367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.801 [2024-12-08 21:11:17.760378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:56.801 [2024-12-08 21:11:17.760389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:24:56.801 [2024-12-08 21:11:17.760400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.801 [2024-12-08 21:11:17.775655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.801 [2024-12-08 21:11:17.775694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:56.801 [2024-12-08 21:11:17.775726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.147 ms 00:24:56.801 [2024-12-08 21:11:17.775741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.801 [2024-12-08 21:11:17.775822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.801 [2024-12-08 21:11:17.775836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:56.801 [2024-12-08 21:11:17.775846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:56.801 [2024-12-08 21:11:17.775855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.801 [2024-12-08 21:11:17.817260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.801 [2024-12-08 21:11:17.817324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.801 [2024-12-08 21:11:17.817357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.347 ms 00:24:56.801 [2024-12-08 21:11:17.817369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.801 [2024-12-08 21:11:17.817469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.801 [2024-12-08 21:11:17.817486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:56.801 [2024-12-08 21:11:17.817497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:56.801 [2024-12-08 21:11:17.817507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.801 [2024-12-08 21:11:17.817908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.801 [2024-12-08 21:11:17.817937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:56.801 [2024-12-08 21:11:17.817952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:24:56.801 [2024-12-08 21:11:17.817968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.801 [2024-12-08 21:11:17.818155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.801 [2024-12-08 21:11:17.818180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:56.801 [2024-12-08 21:11:17.818193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:24:56.801 [2024-12-08 21:11:17.818204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.801 [2024-12-08 21:11:17.834047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.801 [2024-12-08 21:11:17.834110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:56.801 [2024-12-08 21:11:17.834127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.815 ms 00:24:56.801 [2024-12-08 
21:11:17.834138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.060 [2024-12-08 21:11:17.849274] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:57.060 [2024-12-08 21:11:17.849326] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:57.060 [2024-12-08 21:11:17.849357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.060 [2024-12-08 21:11:17.849368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:57.060 [2024-12-08 21:11:17.849379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.096 ms 00:24:57.060 [2024-12-08 21:11:17.849389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.060 [2024-12-08 21:11:17.872993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.060 [2024-12-08 21:11:17.873046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:57.060 [2024-12-08 21:11:17.873062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.562 ms 00:24:57.060 [2024-12-08 21:11:17.873080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.060 [2024-12-08 21:11:17.885864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.060 [2024-12-08 21:11:17.885897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:57.060 [2024-12-08 21:11:17.885926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.740 ms 00:24:57.060 [2024-12-08 21:11:17.885935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.060 [2024-12-08 21:11:17.898085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.060 [2024-12-08 21:11:17.898128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:57.060 [2024-12-08 21:11:17.898157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.111 ms 00:24:57.060 [2024-12-08 21:11:17.898167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.060 [2024-12-08 21:11:17.898607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.060 [2024-12-08 21:11:17.898637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:57.060 [2024-12-08 21:11:17.898650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:24:57.060 [2024-12-08 21:11:17.898661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.060 [2024-12-08 21:11:17.957600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.060 [2024-12-08 21:11:17.957655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:57.060 [2024-12-08 21:11:17.957689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.916 ms 00:24:57.060 [2024-12-08 21:11:17.957699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.060 [2024-12-08 21:11:17.967605] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:57.060 [2024-12-08 21:11:17.969635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.060 [2024-12-08 21:11:17.969680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:57.060 [2024-12-08 21:11:17.969710] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.879 ms 00:24:57.060 [2024-12-08 21:11:17.969725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.060 [2024-12-08 21:11:17.969805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.060 [2024-12-08 21:11:17.969823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:57.060 [2024-12-08 21:11:17.969834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:57.060 [2024-12-08 21:11:17.969845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.060 [2024-12-08 21:11:17.970480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.060 [2024-12-08 21:11:17.970523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:57.061 [2024-12-08 21:11:17.970536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:24:57.061 [2024-12-08 21:11:17.970547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.061 [2024-12-08 21:11:17.972357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.061 [2024-12-08 21:11:17.972407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:57.061 [2024-12-08 21:11:17.972452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:24:57.061 [2024-12-08 21:11:17.972462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.061 [2024-12-08 21:11:17.972512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.061 [2024-12-08 21:11:17.972525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:57.061 [2024-12-08 21:11:17.972543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:57.061 [2024-12-08 21:11:17.972567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.061 [2024-12-08 21:11:17.972605] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:57.061 [2024-12-08 21:11:17.972620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.061 [2024-12-08 21:11:17.972633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:57.061 [2024-12-08 21:11:17.972642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:57.061 [2024-12-08 21:11:17.972652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.061 [2024-12-08 21:11:17.997059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.061 [2024-12-08 21:11:17.997106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:57.061 [2024-12-08 21:11:17.997137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.350 ms 00:24:57.061 [2024-12-08 21:11:17.997147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.061 [2024-12-08 21:11:17.997224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.061 [2024-12-08 21:11:17.997240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:57.061 [2024-12-08 21:11:17.997250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:57.061 [2024-12-08 21:11:17.997260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.061 [2024-12-08 21:11:17.998563] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 263.258 ms, result 0 00:24:58.438  [2024-12-08T21:11:20.419Z] Copying: 24/1024 [MB] (24 MBps) [... 42 intermediate progress updates elided; throughput held steady at 21-23 MBps ...] [2024-12-08T21:12:03.187Z] Copying: 995/1024 [MB] (22 MBps) [2024-12-08T21:12:03.753Z] Copying: 1017/1024 [MB] (22 MBps) [2024-12-08T21:12:03.753Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-08 21:12:03.684156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.710 [2024-12-08 21:12:03.684225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:42.710 [2024-12-08 21:12:03.684245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:42.710 [2024-12-08 21:12:03.684258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.710 [2024-12-08 21:12:03.684289] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:42.710 [2024-12-08 21:12:03.689536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:42.710 [2024-12-08 21:12:03.689589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:42.710 [2024-12-08 21:12:03.689619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.223 ms 00:25:42.710 [2024-12-08 21:12:03.689629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.710 [2024-12-08 21:12:03.689898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.710 [2024-12-08 21:12:03.689917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:42.710 [2024-12-08 21:12:03.689930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:25:42.710 [2024-12-08 21:12:03.689940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.710 [2024-12-08 21:12:03.693395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.710 [2024-12-08 21:12:03.693424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:42.710 [2024-12-08 21:12:03.693443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.436 ms 00:25:42.710 [2024-12-08 21:12:03.693454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.710 [2024-12-08 21:12:03.700575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.710 [2024-12-08 21:12:03.700620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:42.710 [2024-12-08 21:12:03.700648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.100 ms 00:25:42.710 [2024-12-08 21:12:03.700658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.710 [2024-12-08 21:12:03.726086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.710 [2024-12-08 21:12:03.726139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:42.710 [2024-12-08 21:12:03.726170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.361 ms 00:25:42.710 [2024-12-08 21:12:03.726179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.710 [2024-12-08 21:12:03.741039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.710 [2024-12-08 21:12:03.741111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:42.710 [2024-12-08 21:12:03.741128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.821 ms 00:25:42.710 [2024-12-08 21:12:03.741146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.710 [2024-12-08 21:12:03.745503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.710 [2024-12-08 21:12:03.745542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:42.710 [2024-12-08 21:12:03.745573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.331 ms 00:25:42.710 [2024-12-08 21:12:03.745584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.969 [2024-12-08 21:12:03.771984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.969 [2024-12-08 21:12:03.772034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:42.969 [2024-12-08 21:12:03.772048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.380 ms 00:25:42.969 [2024-12-08 21:12:03.772057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.969 [2024-12-08 21:12:03.797049] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.969 [2024-12-08 21:12:03.797129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:42.969 [2024-12-08 21:12:03.797158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.940 ms 00:25:42.969 [2024-12-08 21:12:03.797168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.969 [2024-12-08 21:12:03.821862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.969 [2024-12-08 21:12:03.821897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:42.969 [2024-12-08 21:12:03.821927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.656 ms 00:25:42.969 [2024-12-08 21:12:03.821937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.969 [2024-12-08 21:12:03.847728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.969 [2024-12-08 21:12:03.847781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:42.969 [2024-12-08 21:12:03.847796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.718 ms 00:25:42.969 [2024-12-08 21:12:03.847806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.969 [2024-12-08 21:12:03.847843] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:42.969 [2024-12-08 21:12:03.847869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:42.970 [2024-12-08 21:12:03.847882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 4096 / 261120 wr_cnt: 1 state: open 00:25:42.970 [2024-12-08 21:12:03.847892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.847997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 
wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848755] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:42.970 [2024-12-08 21:12:03.848896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.848998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849008] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:42.971 [2024-12-08 21:12:03.849174] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:42.971 [2024-12-08 21:12:03.849185] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 63da4528-73f7-4366-8cb9-ed8d58b39646 00:25:42.971 [2024-12-08 21:12:03.849195] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 265216 00:25:42.971 [2024-12-08 21:12:03.849215] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:42.971 [2024-12-08 21:12:03.849226] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:42.971 [2024-12-08 21:12:03.849236] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:42.971 [2024-12-08 21:12:03.849246] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:42.971 [2024-12-08 21:12:03.849257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:42.971 [2024-12-08 21:12:03.849267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:42.971 [2024-12-08 21:12:03.849287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:42.971 [2024-12-08 21:12:03.849296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:42.971 [2024-12-08 21:12:03.849307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.971 [2024-12-08 21:12:03.849317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:42.971 [2024-12-08 21:12:03.849332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:25:42.971 [2024-12-08 21:12:03.849343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:03.863840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.971 [2024-12-08 21:12:03.863870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:42.971 [2024-12-08 21:12:03.863900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.444 ms 00:25:42.971 [2024-12-08 21:12:03.863910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:03.864191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.971 [2024-12-08 21:12:03.864210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:42.971 [2024-12-08 21:12:03.864222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:25:42.971 [2024-12-08 21:12:03.864233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:03.901762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:03.901800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.971 [2024-12-08 21:12:03.901830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:03.901839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:03.901894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:03.901907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.971 [2024-12-08 21:12:03.901916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:03.901925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:03.902001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:03.902018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.971 [2024-12-08 21:12:03.902044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:03.902069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:03.902089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:03.902106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.971 [2024-12-08 21:12:03.902130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:03.902143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:03.976712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:03.976787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.971 [2024-12-08 21:12:03.976818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:03.976828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:04.007560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:04.007603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.971 [2024-12-08 21:12:04.007633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:04.007643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:04.007714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:04.007730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.971 
[2024-12-08 21:12:04.007741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:04.007751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:04.007799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:04.007827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.971 [2024-12-08 21:12:04.007858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:04.007867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:04.007987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:04.008005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.971 [2024-12-08 21:12:04.008015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:04.008025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:04.008066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:04.008082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:42.971 [2024-12-08 21:12:04.008092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:04.008160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:04.008207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:04.008222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.971 [2024-12-08 21:12:04.008234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:04.008244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:04.008295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.971 [2024-12-08 21:12:04.008310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.971 [2024-12-08 21:12:04.008328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.971 [2024-12-08 21:12:04.008339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.971 [2024-12-08 21:12:04.008543] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 324.351 ms, result 0 00:25:43.908 00:25:43.908 00:25:43.908 21:12:04 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:45.849 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:45.849 21:12:06 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:45.849 21:12:06 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:45.849 21:12:06 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:45.849 21:12:06 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:45.849 21:12:06 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:46.118 21:12:06 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:46.118 21:12:06 -- ftl/dirty_shutdown.sh@35 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:46.118 21:12:06 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75651 00:25:46.118 21:12:06 -- common/autotest_common.sh@936 -- # '[' -z 75651 ']' 00:25:46.118 21:12:06 -- common/autotest_common.sh@940 -- # kill -0 75651 00:25:46.118 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75651) - No such process 00:25:46.118 Process with pid 75651 is not found 00:25:46.118 21:12:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75651 is not found' 00:25:46.118 21:12:06 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:46.118 21:12:07 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:46.118 Remove shared memory files 00:25:46.118 21:12:07 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:46.118 21:12:07 -- ftl/common.sh@205 -- # rm -f rm -f 00:25:46.118 21:12:07 -- ftl/common.sh@206 -- # rm -f rm -f 00:25:46.118 21:12:07 -- ftl/common.sh@207 -- # rm -f rm -f 00:25:46.118 21:12:07 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:46.118 21:12:07 -- ftl/common.sh@209 -- # rm -f rm -f 00:25:46.118 ************************************ 00:25:46.118 END TEST ftl_dirty_shutdown 00:25:46.118 ************************************ 00:25:46.118 00:25:46.118 real 3m54.843s 00:25:46.118 user 4m29.201s 00:25:46.118 sys 0m35.004s 00:25:46.118 21:12:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:46.118 21:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:46.379 21:12:07 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:46.379 21:12:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:46.379 21:12:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:46.379 21:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:46.379 ************************************ 00:25:46.379 START TEST ftl_upgrade_shutdown 00:25:46.379 ************************************ 00:25:46.379 21:12:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:46.379 * Looking for test storage... 
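The dirty_shutdown test that finishes above follows a simple pattern: write a known payload through the FTL bdev, record its md5, bring the device down without a clean shutdown, restart it, and check that md5sum -c still reports OK. The timing in the log is self-consistent: 1024 MB at a reported average of 22 MBps is about 46 s of copy time, matching the roughly 45 s wall-clock span of the Copying updates (21:11:20 through 21:12:03). A minimal sketch of that verify-after-dirty-shutdown shape is below; the paths and the dd-based payload are illustrative stand-ins, not the test's actual helpers.

  #!/usr/bin/env bash
  # Sketch only: write a payload, checksum it, then re-verify after the
  # target has been killed and restarted with the device left dirty.
  set -euo pipefail

  testfile=/tmp/ftl_testfile        # stand-in path, not the test's real one
  md5file=$testfile.md5

  # 1) Write a known payload and record its checksum.
  dd if=/dev/urandom of="$testfile" bs=1M count=256
  md5sum "$testfile" > "$md5file"

  # ... in the real test the payload goes through the FTL bdev and the
  # target is then killed without detaching, leaving the device dirty ...

  # 2) After restart and read-back, confirm nothing was lost.
  md5sum -c "$md5file"              # prints "<file>: OK", as in the log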
00:25:46.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:46.379 21:12:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:46.379 21:12:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:46.379 21:12:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:46.379 21:12:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:46.379 21:12:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:46.379 21:12:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:46.379 21:12:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:46.379 21:12:07 -- scripts/common.sh@335 -- # IFS=.-: 00:25:46.379 21:12:07 -- scripts/common.sh@335 -- # read -ra ver1 00:25:46.379 21:12:07 -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.379 21:12:07 -- scripts/common.sh@336 -- # read -ra ver2 00:25:46.379 21:12:07 -- scripts/common.sh@337 -- # local 'op=<' 00:25:46.379 21:12:07 -- scripts/common.sh@339 -- # ver1_l=2 00:25:46.379 21:12:07 -- scripts/common.sh@340 -- # ver2_l=1 00:25:46.379 21:12:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:46.379 21:12:07 -- scripts/common.sh@343 -- # case "$op" in 00:25:46.379 21:12:07 -- scripts/common.sh@344 -- # : 1 00:25:46.379 21:12:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:46.379 21:12:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:46.379 21:12:07 -- scripts/common.sh@364 -- # decimal 1 00:25:46.379 21:12:07 -- scripts/common.sh@352 -- # local d=1 00:25:46.379 21:12:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.379 21:12:07 -- scripts/common.sh@354 -- # echo 1 00:25:46.379 21:12:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:46.379 21:12:07 -- scripts/common.sh@365 -- # decimal 2 00:25:46.379 21:12:07 -- scripts/common.sh@352 -- # local d=2 00:25:46.379 21:12:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.379 21:12:07 -- scripts/common.sh@354 -- # echo 2 00:25:46.379 21:12:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:46.379 21:12:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:46.379 21:12:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:46.379 21:12:07 -- scripts/common.sh@367 -- # return 0 00:25:46.379 21:12:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.379 21:12:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:46.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.379 --rc genhtml_branch_coverage=1 00:25:46.379 --rc genhtml_function_coverage=1 00:25:46.379 --rc genhtml_legend=1 00:25:46.379 --rc geninfo_all_blocks=1 00:25:46.379 --rc geninfo_unexecuted_blocks=1 00:25:46.379 00:25:46.379 ' 00:25:46.379 21:12:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:46.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.379 --rc genhtml_branch_coverage=1 00:25:46.379 --rc genhtml_function_coverage=1 00:25:46.379 --rc genhtml_legend=1 00:25:46.379 --rc geninfo_all_blocks=1 00:25:46.379 --rc geninfo_unexecuted_blocks=1 00:25:46.379 00:25:46.379 ' 00:25:46.379 21:12:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:46.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.379 --rc genhtml_branch_coverage=1 00:25:46.379 --rc genhtml_function_coverage=1 00:25:46.379 --rc genhtml_legend=1 00:25:46.379 --rc geninfo_all_blocks=1 00:25:46.379 --rc geninfo_unexecuted_blocks=1 00:25:46.379 00:25:46.379 ' 00:25:46.379 21:12:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:46.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.379 --rc genhtml_branch_coverage=1 00:25:46.379 --rc genhtml_function_coverage=1 00:25:46.379 --rc genhtml_legend=1 00:25:46.379 --rc geninfo_all_blocks=1 00:25:46.379 --rc geninfo_unexecuted_blocks=1 00:25:46.379 00:25:46.379 ' 00:25:46.379 21:12:07 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:46.379 21:12:07 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:46.379 21:12:07 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:46.379 21:12:07 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:46.379 21:12:07 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:46.379 21:12:07 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:46.379 21:12:07 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:46.379 21:12:07 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:46.379 21:12:07 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:46.379 21:12:07 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:46.379 21:12:07 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:46.379 21:12:07 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:46.379 21:12:07 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:46.379 21:12:07 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:46.379 21:12:07 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:46.379 21:12:07 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:46.379 21:12:07 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:46.379 21:12:07 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:46.379 21:12:07 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:46.380 21:12:07 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:46.380 21:12:07 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:46.380 21:12:07 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:46.380 21:12:07 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:46.380 21:12:07 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:46.380 21:12:07 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:46.380 21:12:07 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:46.380 21:12:07 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:46.380 21:12:07 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.380 21:12:07 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:46.380 21:12:07 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:46.380 21:12:07 -- ftl/common.sh@81 -- # local base_bdev= 00:25:46.380 21:12:07 -- ftl/common.sh@82 -- # local cache_bdev= 00:25:46.380 21:12:07 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:46.380 21:12:07 -- ftl/common.sh@89 -- # spdk_tgt_pid=78195 00:25:46.380 21:12:07 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:46.380 21:12:07 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:46.380 21:12:07 -- ftl/common.sh@91 -- # waitforlisten 78195 00:25:46.380 21:12:07 -- common/autotest_common.sh@829 -- # '[' -z 78195 ']' 00:25:46.380 21:12:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.380 21:12:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.380 21:12:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.380 21:12:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.380 21:12:07 -- common/autotest_common.sh@10 -- # set +x 00:25:46.639 [2024-12-08 21:12:07.526463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
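The lt 1.15 2 trace near the top of this test exercises the field-wise version comparison in scripts/common.sh: both versions are split on ".", "-" and ":" (IFS=.-:), missing fields default to 0, and the first unequal field decides the result. Here 1 < 2, so lcov 1.15 sorts before 2 and the branch-coverage lcov options are enabled. A condensed sketch of that logic follows; it supports only the strict '<' and '>' operators, while the real cmp_versions also handles <=, >=, == and reduces non-numeric fields to their leading decimal.

  # Condensed version-compare sketch (simplified from scripts/common.sh).
  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
          if (( d1 != d2 )); then
              case $2 in
                  '<') (( d1 < d2 )); return ;;
                  '>') (( d1 > d2 )); return ;;
              esac
          fi
      done
      return 1   # all fields equal: neither strictly less nor greater
  }

  # lt 1.15 2  -> first fields 1 vs 2 differ, 1 < 2, so lt returns 0 (true),
  # which is exactly the return 0 seen in the trace above.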
00:25:46.639 [2024-12-08 21:12:07.526627] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78195 ] 00:25:46.897 [2024-12-08 21:12:07.694348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.897 [2024-12-08 21:12:07.868050] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:46.897 [2024-12-08 21:12:07.868284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.465 21:12:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.465 21:12:08 -- common/autotest_common.sh@862 -- # return 0 00:25:47.465 21:12:08 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:47.465 21:12:08 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:47.465 21:12:08 -- ftl/common.sh@99 -- # local params 00:25:47.465 21:12:08 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:47.465 21:12:08 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:47.465 21:12:08 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:47.465 21:12:08 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:25:47.465 21:12:08 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:47.465 21:12:08 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:47.465 21:12:08 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:47.465 21:12:08 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:25:47.465 21:12:08 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:47.465 21:12:08 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:47.465 21:12:08 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:47.465 21:12:08 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:47.465 21:12:08 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:25:47.465 21:12:08 -- ftl/common.sh@54 -- # local name=base 00:25:47.465 21:12:08 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:25:47.465 21:12:08 -- ftl/common.sh@56 -- # local size=20480 00:25:47.465 21:12:08 -- ftl/common.sh@59 -- # local base_bdev 00:25:47.465 21:12:08 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:25:48.031 21:12:08 -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:48.031 21:12:08 -- ftl/common.sh@62 -- # local base_size 00:25:48.031 21:12:08 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:48.031 21:12:08 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:25:48.031 21:12:08 -- common/autotest_common.sh@1368 -- # local bdev_info 00:25:48.031 21:12:08 -- common/autotest_common.sh@1369 -- # local bs 00:25:48.032 21:12:08 -- common/autotest_common.sh@1370 -- # local nb 00:25:48.032 21:12:08 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:48.289 21:12:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:25:48.289 { 00:25:48.289 "name": "basen1", 00:25:48.289 "aliases": [ 00:25:48.289 "7a8b138d-8a3e-482c-b120-50888846b365" 00:25:48.289 ], 00:25:48.289 "product_name": "NVMe disk", 00:25:48.289 "block_size": 4096, 00:25:48.289 "num_blocks": 1310720, 00:25:48.289 "uuid": "7a8b138d-8a3e-482c-b120-50888846b365", 00:25:48.289 "assigned_rate_limits": { 00:25:48.289 "rw_ios_per_sec": 0, 00:25:48.289 
"rw_mbytes_per_sec": 0, 00:25:48.289 "r_mbytes_per_sec": 0, 00:25:48.289 "w_mbytes_per_sec": 0 00:25:48.289 }, 00:25:48.289 "claimed": true, 00:25:48.289 "claim_type": "read_many_write_one", 00:25:48.289 "zoned": false, 00:25:48.289 "supported_io_types": { 00:25:48.289 "read": true, 00:25:48.289 "write": true, 00:25:48.289 "unmap": true, 00:25:48.289 "write_zeroes": true, 00:25:48.289 "flush": true, 00:25:48.289 "reset": true, 00:25:48.289 "compare": true, 00:25:48.289 "compare_and_write": false, 00:25:48.289 "abort": true, 00:25:48.289 "nvme_admin": true, 00:25:48.289 "nvme_io": true 00:25:48.289 }, 00:25:48.289 "driver_specific": { 00:25:48.289 "nvme": [ 00:25:48.289 { 00:25:48.289 "pci_address": "0000:00:07.0", 00:25:48.289 "trid": { 00:25:48.289 "trtype": "PCIe", 00:25:48.289 "traddr": "0000:00:07.0" 00:25:48.289 }, 00:25:48.289 "ctrlr_data": { 00:25:48.289 "cntlid": 0, 00:25:48.289 "vendor_id": "0x1b36", 00:25:48.289 "model_number": "QEMU NVMe Ctrl", 00:25:48.289 "serial_number": "12341", 00:25:48.289 "firmware_revision": "8.0.0", 00:25:48.289 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:48.289 "oacs": { 00:25:48.289 "security": 0, 00:25:48.289 "format": 1, 00:25:48.289 "firmware": 0, 00:25:48.289 "ns_manage": 1 00:25:48.289 }, 00:25:48.289 "multi_ctrlr": false, 00:25:48.289 "ana_reporting": false 00:25:48.289 }, 00:25:48.289 "vs": { 00:25:48.289 "nvme_version": "1.4" 00:25:48.289 }, 00:25:48.289 "ns_data": { 00:25:48.289 "id": 1, 00:25:48.289 "can_share": false 00:25:48.289 } 00:25:48.289 } 00:25:48.289 ], 00:25:48.289 "mp_policy": "active_passive" 00:25:48.289 } 00:25:48.289 } 00:25:48.289 ]' 00:25:48.289 21:12:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:25:48.289 21:12:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:25:48.289 21:12:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:25:48.289 21:12:09 -- common/autotest_common.sh@1373 -- # nb=1310720 00:25:48.289 21:12:09 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:25:48.289 21:12:09 -- common/autotest_common.sh@1377 -- # echo 5120 00:25:48.289 21:12:09 -- ftl/common.sh@63 -- # base_size=5120 00:25:48.289 21:12:09 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:48.289 21:12:09 -- ftl/common.sh@67 -- # clear_lvols 00:25:48.289 21:12:09 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:48.289 21:12:09 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:48.546 21:12:09 -- ftl/common.sh@28 -- # stores=f626c195-8bfd-4aa9-a4a6-5a600f9a637d 00:25:48.546 21:12:09 -- ftl/common.sh@29 -- # for lvs in $stores 00:25:48.546 21:12:09 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f626c195-8bfd-4aa9-a4a6-5a600f9a637d 00:25:48.804 21:12:09 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:49.061 21:12:09 -- ftl/common.sh@68 -- # lvs=3393f6e2-80d8-4bb0-b879-5e8776fa17b2 00:25:49.061 21:12:09 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 3393f6e2-80d8-4bb0-b879-5e8776fa17b2 00:25:49.318 21:12:10 -- ftl/common.sh@107 -- # base_bdev=1560463f-6dbc-4eb6-8f3b-46685d6c2dad 00:25:49.318 21:12:10 -- ftl/common.sh@108 -- # [[ -z 1560463f-6dbc-4eb6-8f3b-46685d6c2dad ]] 00:25:49.318 21:12:10 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 1560463f-6dbc-4eb6-8f3b-46685d6c2dad 5120 00:25:49.318 21:12:10 -- ftl/common.sh@35 -- # local name=cache 00:25:49.318 21:12:10 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:25:49.318 21:12:10 -- ftl/common.sh@37 -- # local base_bdev=1560463f-6dbc-4eb6-8f3b-46685d6c2dad 00:25:49.318 21:12:10 -- ftl/common.sh@38 -- # local cache_size=5120 00:25:49.318 21:12:10 -- ftl/common.sh@41 -- # get_bdev_size 1560463f-6dbc-4eb6-8f3b-46685d6c2dad 00:25:49.318 21:12:10 -- common/autotest_common.sh@1367 -- # local bdev_name=1560463f-6dbc-4eb6-8f3b-46685d6c2dad 00:25:49.318 21:12:10 -- common/autotest_common.sh@1368 -- # local bdev_info 00:25:49.318 21:12:10 -- common/autotest_common.sh@1369 -- # local bs 00:25:49.318 21:12:10 -- common/autotest_common.sh@1370 -- # local nb 00:25:49.318 21:12:10 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1560463f-6dbc-4eb6-8f3b-46685d6c2dad 00:25:49.318 21:12:10 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:25:49.318 { 00:25:49.318 "name": "1560463f-6dbc-4eb6-8f3b-46685d6c2dad", 00:25:49.318 "aliases": [ 00:25:49.318 "lvs/basen1p0" 00:25:49.318 ], 00:25:49.318 "product_name": "Logical Volume", 00:25:49.318 "block_size": 4096, 00:25:49.318 "num_blocks": 5242880, 00:25:49.318 "uuid": "1560463f-6dbc-4eb6-8f3b-46685d6c2dad", 00:25:49.318 "assigned_rate_limits": { 00:25:49.318 "rw_ios_per_sec": 0, 00:25:49.318 "rw_mbytes_per_sec": 0, 00:25:49.318 "r_mbytes_per_sec": 0, 00:25:49.318 "w_mbytes_per_sec": 0 00:25:49.318 }, 00:25:49.318 "claimed": false, 00:25:49.318 "zoned": false, 00:25:49.318 "supported_io_types": { 00:25:49.318 "read": true, 00:25:49.318 "write": true, 00:25:49.318 "unmap": true, 00:25:49.318 "write_zeroes": true, 00:25:49.318 "flush": false, 00:25:49.318 "reset": true, 00:25:49.318 "compare": false, 00:25:49.318 "compare_and_write": false, 00:25:49.318 "abort": false, 00:25:49.318 "nvme_admin": false, 00:25:49.318 "nvme_io": false 00:25:49.318 }, 00:25:49.318 "driver_specific": { 00:25:49.318 "lvol": { 00:25:49.318 "lvol_store_uuid": "3393f6e2-80d8-4bb0-b879-5e8776fa17b2", 00:25:49.318 "base_bdev": "basen1", 00:25:49.318 "thin_provision": true, 00:25:49.318 "snapshot": false, 00:25:49.318 "clone": false, 00:25:49.318 "esnap_clone": false 00:25:49.318 } 00:25:49.318 } 00:25:49.318 } 00:25:49.318 ]' 00:25:49.318 21:12:10 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:25:49.576 21:12:10 -- common/autotest_common.sh@1372 -- # bs=4096 00:25:49.576 21:12:10 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:25:49.576 21:12:10 -- common/autotest_common.sh@1373 -- # nb=5242880 00:25:49.576 21:12:10 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:25:49.576 21:12:10 -- common/autotest_common.sh@1377 -- # echo 20480 00:25:49.576 21:12:10 -- ftl/common.sh@41 -- # local base_size=1024 00:25:49.576 21:12:10 -- ftl/common.sh@44 -- # local nvc_bdev 00:25:49.576 21:12:10 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:25:49.834 21:12:10 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:49.834 21:12:10 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:49.834 21:12:10 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:50.092 21:12:10 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:50.092 21:12:10 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:50.092 21:12:10 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 1560463f-6dbc-4eb6-8f3b-46685d6c2dad -c cachen1p0 --l2p_dram_limit 2 00:25:50.351 
[2024-12-08 21:12:11.199624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.199688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:50.351 [2024-12-08 21:12:11.199709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:50.351 [2024-12-08 21:12:11.199722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.199797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.199814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:50.351 [2024-12-08 21:12:11.199859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:25:50.351 [2024-12-08 21:12:11.199886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.199917] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:50.351 [2024-12-08 21:12:11.200947] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:50.351 [2024-12-08 21:12:11.200989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.201003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:50.351 [2024-12-08 21:12:11.201018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.076 ms 00:25:50.351 [2024-12-08 21:12:11.201029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.201173] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b5946c2b-86c0-48f0-ab7d-7b4bbeb00991 00:25:50.351 [2024-12-08 21:12:11.202261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.202317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:50.351 [2024-12-08 21:12:11.202332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:25:50.351 [2024-12-08 21:12:11.202344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.206626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.206682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:50.351 [2024-12-08 21:12:11.206713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.231 ms 00:25:50.351 [2024-12-08 21:12:11.206725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.206781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.206799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:50.351 [2024-12-08 21:12:11.206811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:50.351 [2024-12-08 21:12:11.206824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.206889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.206912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:50.351 [2024-12-08 21:12:11.206955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:50.351 [2024-12-08 21:12:11.206984] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.207018] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:50.351 [2024-12-08 21:12:11.211226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.211264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:50.351 [2024-12-08 21:12:11.211280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.215 ms 00:25:50.351 [2024-12-08 21:12:11.211291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.211329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.211344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:50.351 [2024-12-08 21:12:11.211358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:50.351 [2024-12-08 21:12:11.211368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.211421] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:50.351 [2024-12-08 21:12:11.211594] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:25:50.351 [2024-12-08 21:12:11.211616] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:50.351 [2024-12-08 21:12:11.211630] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:25:50.351 [2024-12-08 21:12:11.211646] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:50.351 [2024-12-08 21:12:11.211659] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:50.351 [2024-12-08 21:12:11.211676] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:50.351 [2024-12-08 21:12:11.211687] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:50.351 [2024-12-08 21:12:11.211701] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:25:50.351 [2024-12-08 21:12:11.211712] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:25:50.351 [2024-12-08 21:12:11.211725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.211747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:50.351 [2024-12-08 21:12:11.211762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.307 ms 00:25:50.351 [2024-12-08 21:12:11.211773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.211846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.351 [2024-12-08 21:12:11.211860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:50.351 [2024-12-08 21:12:11.211873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:25:50.351 [2024-12-08 21:12:11.211887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.351 [2024-12-08 21:12:11.211971] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:50.351 [2024-12-08 21:12:11.211996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:50.351 [2024-12-08 
21:12:11.212012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:50.351 [2024-12-08 21:12:11.212023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:50.351 [2024-12-08 21:12:11.212036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:50.351 [2024-12-08 21:12:11.212046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:50.351 [2024-12-08 21:12:11.212059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:50.351 [2024-12-08 21:12:11.212069] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:50.351 [2024-12-08 21:12:11.212127] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:50.351 [2024-12-08 21:12:11.212156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:50.351 [2024-12-08 21:12:11.212170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:50.352 [2024-12-08 21:12:11.212181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:50.352 [2024-12-08 21:12:11.212196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:50.352 [2024-12-08 21:12:11.212207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:50.352 [2024-12-08 21:12:11.212219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:25:50.352 [2024-12-08 21:12:11.212230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:50.352 [2024-12-08 21:12:11.212244] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:50.352 [2024-12-08 21:12:11.212255] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:25:50.352 [2024-12-08 21:12:11.212268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:50.352 [2024-12-08 21:12:11.212278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:25:50.352 [2024-12-08 21:12:11.212290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:25:50.352 [2024-12-08 21:12:11.212301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:25:50.352 [2024-12-08 21:12:11.212313] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:50.352 [2024-12-08 21:12:11.212324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:50.352 [2024-12-08 21:12:11.212336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:50.352 [2024-12-08 21:12:11.212347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:50.352 [2024-12-08 21:12:11.212359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:25:50.352 [2024-12-08 21:12:11.212369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:50.352 [2024-12-08 21:12:11.212382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:50.352 [2024-12-08 21:12:11.212392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:50.352 [2024-12-08 21:12:11.212404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:50.352 [2024-12-08 21:12:11.212415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:50.352 [2024-12-08 21:12:11.212429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:25:50.352 [2024-12-08 21:12:11.212455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:50.352 [2024-12-08 
21:12:11.212467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:50.352 [2024-12-08 21:12:11.212477] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:50.352 [2024-12-08 21:12:11.212503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:50.352 [2024-12-08 21:12:11.212514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:50.352 [2024-12-08 21:12:11.212529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:25:50.352 [2024-12-08 21:12:11.212540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:50.352 [2024-12-08 21:12:11.212551] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:50.352 [2024-12-08 21:12:11.212562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:50.352 [2024-12-08 21:12:11.212574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:50.352 [2024-12-08 21:12:11.212585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:50.352 [2024-12-08 21:12:11.212600] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:50.352 [2024-12-08 21:12:11.212611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:50.352 [2024-12-08 21:12:11.212623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:50.352 [2024-12-08 21:12:11.212634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:50.352 [2024-12-08 21:12:11.212647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:50.352 [2024-12-08 21:12:11.212657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:50.352 [2024-12-08 21:12:11.212670] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:50.352 [2024-12-08 21:12:11.212685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.352 [2024-12-08 21:12:11.212698] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:50.352 [2024-12-08 21:12:11.212709] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:25:50.352 [2024-12-08 21:12:11.212722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:25:50.352 [2024-12-08 21:12:11.212733] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:25:50.352 [2024-12-08 21:12:11.212745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:25:50.352 [2024-12-08 21:12:11.212756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:25:50.352 [2024-12-08 21:12:11.212768] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:25:50.352 [2024-12-08 21:12:11.212779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:25:50.352 [2024-12-08 21:12:11.212793] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:25:50.352 [2024-12-08 21:12:11.212803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:25:50.352 [2024-12-08 21:12:11.212817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:25:50.352 [2024-12-08 21:12:11.212828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:25:50.352 [2024-12-08 21:12:11.212845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:25:50.352 [2024-12-08 21:12:11.212856] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:50.352 [2024-12-08 21:12:11.212870] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.352 [2024-12-08 21:12:11.212882] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:50.352 [2024-12-08 21:12:11.212894] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:50.352 [2024-12-08 21:12:11.212905] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:50.352 [2024-12-08 21:12:11.212919] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:50.352 [2024-12-08 21:12:11.212931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.212944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:50.352 [2024-12-08 21:12:11.212955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.008 ms 00:25:50.352 [2024-12-08 21:12:11.212968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.352 [2024-12-08 21:12:11.229072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.229159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:50.352 [2024-12-08 21:12:11.229176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.051 ms 00:25:50.352 [2024-12-08 21:12:11.229189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.352 [2024-12-08 21:12:11.229236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.229257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:50.352 [2024-12-08 21:12:11.229269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:50.352 [2024-12-08 21:12:11.229280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.352 [2024-12-08 21:12:11.260152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.260215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:50.352 [2024-12-08 21:12:11.260249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.815 ms 00:25:50.352 [2024-12-08 
21:12:11.260264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.352 [2024-12-08 21:12:11.260308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.260325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:50.352 [2024-12-08 21:12:11.260337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:50.352 [2024-12-08 21:12:11.260351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.352 [2024-12-08 21:12:11.260752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.260788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:50.352 [2024-12-08 21:12:11.260802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:25:50.352 [2024-12-08 21:12:11.260815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.352 [2024-12-08 21:12:11.260873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.260894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:50.352 [2024-12-08 21:12:11.260906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:25:50.352 [2024-12-08 21:12:11.260918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.352 [2024-12-08 21:12:11.276547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.276600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:50.352 [2024-12-08 21:12:11.276632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.603 ms 00:25:50.352 [2024-12-08 21:12:11.276646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.352 [2024-12-08 21:12:11.287338] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:50.352 [2024-12-08 21:12:11.288329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.288362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:50.352 [2024-12-08 21:12:11.288396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.588 ms 00:25:50.352 [2024-12-08 21:12:11.288408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.352 [2024-12-08 21:12:11.312026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.352 [2024-12-08 21:12:11.312111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:50.352 [2024-12-08 21:12:11.312148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.570 ms 00:25:50.352 [2024-12-08 21:12:11.312160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.353 [2024-12-08 21:12:11.312213] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
00:25:50.353 [2024-12-08 21:12:11.312233] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:25:52.881 [2024-12-08 21:12:13.410142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.410215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:52.881 [2024-12-08 21:12:13.410244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2097.941 ms 00:25:52.881 [2024-12-08 21:12:13.410255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.410362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.410397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:52.881 [2024-12-08 21:12:13.410428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:25:52.881 [2024-12-08 21:12:13.410454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.435327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.435377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:52.881 [2024-12-08 21:12:13.435394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.810 ms 00:25:52.881 [2024-12-08 21:12:13.435405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.460553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.460601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:52.881 [2024-12-08 21:12:13.460635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.098 ms 00:25:52.881 [2024-12-08 21:12:13.460644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.461020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.461047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:52.881 [2024-12-08 21:12:13.461062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:25:52.881 [2024-12-08 21:12:13.461087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.524956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.524989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:52.881 [2024-12-08 21:12:13.525005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 63.805 ms 00:25:52.881 [2024-12-08 21:12:13.525015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.550127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.550158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:52.881 [2024-12-08 21:12:13.550175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.066 ms 00:25:52.881 [2024-12-08 21:12:13.550185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.551949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.551978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:25:52.881 [2024-12-08 21:12:13.551996] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.711 ms 00:25:52.881 [2024-12-08 21:12:13.552006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.576683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.576723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:52.881 [2024-12-08 21:12:13.576741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.633 ms 00:25:52.881 [2024-12-08 21:12:13.576750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.576798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.576814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:52.881 [2024-12-08 21:12:13.576826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:52.881 [2024-12-08 21:12:13.576835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.576920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.881 [2024-12-08 21:12:13.576935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:52.881 [2024-12-08 21:12:13.576962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:25:52.881 [2024-12-08 21:12:13.576987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.881 [2024-12-08 21:12:13.578233] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2378.083 ms, result 0 00:25:52.881 { 00:25:52.881 "name": "ftl", 00:25:52.881 "uuid": "b5946c2b-86c0-48f0-ab7d-7b4bbeb00991" 00:25:52.881 } 00:25:52.881 21:12:13 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:52.881 [2024-12-08 21:12:13.785208] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.881 21:12:13 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:53.141 21:12:14 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:53.400 [2024-12-08 21:12:14.281666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:25:53.400 21:12:14 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:53.658 [2024-12-08 21:12:14.474027] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:53.658 21:12:14 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:53.926 Fill FTL, iteration 1 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@38 -- # (( 
i = 0 )) 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:53.926 21:12:14 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:53.926 21:12:14 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:53.926 21:12:14 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:53.926 21:12:14 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:53.926 21:12:14 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:53.926 21:12:14 -- ftl/common.sh@163 -- # spdk_ini_pid=78306 00:25:53.926 21:12:14 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:53.926 21:12:14 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:53.926 21:12:14 -- ftl/common.sh@165 -- # waitforlisten 78306 /var/tmp/spdk.tgt.sock 00:25:53.926 21:12:14 -- common/autotest_common.sh@829 -- # '[' -z 78306 ']' 00:25:53.926 21:12:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:53.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:53.926 21:12:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.926 21:12:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:53.926 21:12:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.926 21:12:14 -- common/autotest_common.sh@10 -- # set +x 00:25:53.926 [2024-12-08 21:12:14.902765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
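The test now needs an initiator: tcp_initiator_setup launches a second SPDK app (pid 78306 in this run) on core 1 with its own RPC socket, attaches the FTL namespace exported by the target over NVMe/TCP as ftln1, and snapshots the bdev subsystem into ini.json so spdk_dd can recreate ftln1 on demand. Roughly, per the next stretch of the trace (the redirection into ini.json is implied by the flow rather than printed verbatim):

ini_rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!

# Attach the target's nqn.2018-09.io.spdk:cnode0 over TCP; namespace 1 appears as ftln1.
$ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

# Capture just the bdev subsystem; spdk_dd replays this file via --json.
{
  echo '{"subsystems": ['
  $ini_rpc save_subsystem_config -n bdev
  echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

kill "$spdk_ini_pid" && wait "$spdk_ini_pid"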
00:25:53.926 [2024-12-08 21:12:14.902876] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78306 ] 00:25:54.185 [2024-12-08 21:12:15.060639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.443 [2024-12-08 21:12:15.278799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:54.443 [2024-12-08 21:12:15.279019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.822 21:12:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.822 21:12:16 -- common/autotest_common.sh@862 -- # return 0 00:25:55.822 21:12:16 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:55.822 ftln1 00:25:55.822 21:12:16 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:55.822 21:12:16 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:56.082 21:12:16 -- ftl/common.sh@173 -- # echo ']}' 00:25:56.082 21:12:16 -- ftl/common.sh@176 -- # killprocess 78306 00:25:56.082 21:12:16 -- common/autotest_common.sh@936 -- # '[' -z 78306 ']' 00:25:56.082 21:12:16 -- common/autotest_common.sh@940 -- # kill -0 78306 00:25:56.082 21:12:16 -- common/autotest_common.sh@941 -- # uname 00:25:56.082 21:12:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:56.082 21:12:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78306 00:25:56.082 killing process with pid 78306 00:25:56.082 21:12:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:56.082 21:12:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:56.082 21:12:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78306' 00:25:56.082 21:12:17 -- common/autotest_common.sh@955 -- # kill 78306 00:25:56.082 21:12:17 -- common/autotest_common.sh@960 -- # wait 78306 00:25:57.986 21:12:18 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:57.986 21:12:18 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:57.986 [2024-12-08 21:12:18.832383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
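The spdk_dd invocation just launched is the whole fill step: it loads ini.json to get ftln1 back without a long-lived initiator process, then streams /dev/urandom into it. The numbers multiply out to exactly the 1 GiB window the script configured (bs x count = 1048576 x 1024 = 1073741824 = $size), written at queue depth 2; --seek counts --bs-sized blocks, so the seek=1024 used by iteration 2 lands precisely 1 GiB in. Condensed from the trace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
  --rpc-socket=/var/tmp/spdk.tgt.sock \
  --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
  --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0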
00:25:57.986 [2024-12-08 21:12:18.832556] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78361 ] 00:25:57.986 [2024-12-08 21:12:18.986323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.245 [2024-12-08 21:12:19.135752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.624  [2024-12-08T21:12:21.604Z] Copying: 214/1024 [MB] (214 MBps) [2024-12-08T21:12:22.542Z] Copying: 433/1024 [MB] (219 MBps) [2024-12-08T21:12:23.922Z] Copying: 653/1024 [MB] (220 MBps) [2024-12-08T21:12:24.181Z] Copying: 871/1024 [MB] (218 MBps) [2024-12-08T21:12:25.120Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:26:04.077 00:26:04.077 21:12:25 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:04.077 Calculate MD5 checksum, iteration 1 00:26:04.077 21:12:25 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:04.077 21:12:25 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:04.077 21:12:25 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:04.077 21:12:25 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:04.077 21:12:25 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:04.077 21:12:25 -- ftl/common.sh@154 -- # return 0 00:26:04.077 21:12:25 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:04.336 [2024-12-08 21:12:25.196350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
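The mirror-image read just launched pulls the same 1 GiB window back out of ftln1 into a scratch file so it can be hashed on the host side; the digest, not the raw data, is what gets stored in sums[] for comparison later in the test. Condensed:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
  --rpc-socket=/var/tmp/spdk.tgt.sock \
  --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
  --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
  --bs=1048576 --count=1024 --qd=2 --skip=0
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d ' '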
00:26:04.336 [2024-12-08 21:12:25.196515] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78425 ] 00:26:04.336 [2024-12-08 21:12:25.362677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.595 [2024-12-08 21:12:25.509723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.973  [2024-12-08T21:12:27.953Z] Copying: 468/1024 [MB] (468 MBps) [2024-12-08T21:12:28.213Z] Copying: 936/1024 [MB] (468 MBps) [2024-12-08T21:12:29.148Z] Copying: 1024/1024 [MB] (average 467 MBps) 00:26:08.105 00:26:08.105 21:12:28 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:08.105 21:12:28 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:10.007 21:12:30 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:10.007 Fill FTL, iteration 2 00:26:10.008 21:12:30 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=eee211599f501ed100665ba7258e5300 00:26:10.008 21:12:30 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:10.008 21:12:30 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:10.008 21:12:30 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:10.008 21:12:30 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:10.008 21:12:30 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:10.008 21:12:30 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:10.008 21:12:30 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:10.008 21:12:30 -- ftl/common.sh@154 -- # return 0 00:26:10.008 21:12:30 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:10.008 [2024-12-08 21:12:30.884994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
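Iteration 1 is now fully accounted for: sums[0]=eee211599f501ed100665ba7258e5300, and both seek and skip have advanced by count (1024 one-MiB blocks, i.e. 1 GiB) before iteration 2 starts above. Reassembled from the traced variables rather than copied from the script verbatim, the driver loop looks roughly like this; tcp_dd is sketched from the spdk_dd expansion the trace shows (the real helper also re-runs tcp_initiator_setup first):

tcp_dd() {
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
}

size=1073741824 bs=1048576 count=1024 iterations=2 qd=2
seek=0 skip=0 sums=()
file=/home/vagrant/spdk_repo/spdk/test/ftl/file

for (( i = 0; i < iterations; i++ )); do
  echo "Fill FTL, iteration $((i + 1))"
  tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
  (( seek += count ))   # block offset advances 0 -> 1024 -> 2048, matching the trace

  echo "Calculate MD5 checksum, iteration $((i + 1))"
  tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
  (( skip += count ))

  sums[i]=$(md5sum $file | cut -f1 -d ' ')
done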
00:26:10.008 [2024-12-08 21:12:30.885215] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78491 ] 00:26:10.267 [2024-12-08 21:12:31.057953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.267 [2024-12-08 21:12:31.273257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.647  [2024-12-08T21:12:33.629Z] Copying: 222/1024 [MB] (222 MBps) [2024-12-08T21:12:35.010Z] Copying: 440/1024 [MB] (218 MBps) [2024-12-08T21:12:35.946Z] Copying: 656/1024 [MB] (216 MBps) [2024-12-08T21:12:36.514Z] Copying: 878/1024 [MB] (222 MBps) [2024-12-08T21:12:37.472Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:26:16.429 00:26:16.429 21:12:37 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:16.429 21:12:37 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:16.429 Calculate MD5 checksum, iteration 2 00:26:16.429 21:12:37 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:16.429 21:12:37 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:16.429 21:12:37 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:16.429 21:12:37 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:16.429 21:12:37 -- ftl/common.sh@154 -- # return 0 00:26:16.429 21:12:37 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:16.429 [2024-12-08 21:12:37.333883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:16.429 [2024-12-08 21:12:37.334042] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78556 ] 00:26:16.732 [2024-12-08 21:12:37.502481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.732 [2024-12-08 21:12:37.653296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.119  [2024-12-08T21:12:40.540Z] Copying: 478/1024 [MB] (478 MBps) [2024-12-08T21:12:40.540Z] Copying: 955/1024 [MB] (477 MBps) [2024-12-08T21:12:41.917Z] Copying: 1024/1024 [MB] (average 476 MBps) 00:26:20.875 00:26:20.875 21:12:41 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:20.875 21:12:41 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:22.777 21:12:43 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:22.777 21:12:43 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=895def0507ff5cd6d4ac5183f1728c12 00:26:22.777 21:12:43 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:22.777 21:12:43 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:22.777 21:12:43 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:22.777 [2024-12-08 21:12:43.649197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.777 [2024-12-08 21:12:43.649248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:22.777 [2024-12-08 21:12:43.649267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:22.777 [2024-12-08 21:12:43.649283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.777 [2024-12-08 21:12:43.649317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.777 [2024-12-08 21:12:43.649332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:22.777 [2024-12-08 21:12:43.649343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:22.777 [2024-12-08 21:12:43.649353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.777 [2024-12-08 21:12:43.649377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.777 [2024-12-08 21:12:43.649390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:22.777 [2024-12-08 21:12:43.649412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:22.777 [2024-12-08 21:12:43.649422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.777 [2024-12-08 21:12:43.649524] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.310 ms, result 0 00:26:22.777 true 00:26:22.777 21:12:43 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:23.034 { 00:26:23.034 "name": "ftl", 00:26:23.034 "properties": [ 00:26:23.034 { 00:26:23.034 "name": "superblock_version", 00:26:23.034 "value": 5, 00:26:23.034 "read-only": true 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "name": "base_device", 00:26:23.034 "bands": [ 00:26:23.034 { 00:26:23.034 "id": 0, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 1, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 
2, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 3, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 4, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 5, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 6, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 7, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 8, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 9, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 10, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 11, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 12, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 13, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 14, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 15, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 16, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 17, 00:26:23.034 "state": "FREE", 00:26:23.034 "validity": 0.0 00:26:23.034 } 00:26:23.034 ], 00:26:23.034 "read-only": true 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "name": "cache_device", 00:26:23.034 "type": "bdev", 00:26:23.034 "chunks": [ 00:26:23.034 { 00:26:23.034 "id": 0, 00:26:23.034 "state": "CLOSED", 00:26:23.034 "utilization": 1.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.034 "id": 1, 00:26:23.034 "state": "CLOSED", 00:26:23.034 "utilization": 1.0 00:26:23.034 }, 00:26:23.034 { 00:26:23.035 "id": 2, 00:26:23.035 "state": "OPEN", 00:26:23.035 "utilization": 0.001953125 00:26:23.035 }, 00:26:23.035 { 00:26:23.035 "id": 3, 00:26:23.035 "state": "OPEN", 00:26:23.035 "utilization": 0.0 00:26:23.035 } 00:26:23.035 ], 00:26:23.035 "read-only": true 00:26:23.035 }, 00:26:23.035 { 00:26:23.035 "name": "verbose_mode", 00:26:23.035 "value": true, 00:26:23.035 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:23.035 }, 00:26:23.035 { 00:26:23.035 "name": "prep_upgrade_on_shutdown", 00:26:23.035 "value": false, 00:26:23.035 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:23.035 } 00:26:23.035 ] 00:26:23.035 } 00:26:23.035 21:12:43 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:23.292 [2024-12-08 21:12:44.119858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.292 [2024-12-08 21:12:44.119902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:23.293 [2024-12-08 21:12:44.119918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:23.293 [2024-12-08 21:12:44.119927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.293 [2024-12-08 
21:12:44.119955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.293 [2024-12-08 21:12:44.119969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:23.293 [2024-12-08 21:12:44.119979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:23.293 [2024-12-08 21:12:44.119987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.293 [2024-12-08 21:12:44.120009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.293 [2024-12-08 21:12:44.120019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:23.293 [2024-12-08 21:12:44.120029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:23.293 [2024-12-08 21:12:44.120037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.293 [2024-12-08 21:12:44.120156] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.228 ms, result 0 00:26:23.293 true 00:26:23.293 21:12:44 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:23.293 21:12:44 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:23.293 21:12:44 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:23.550 21:12:44 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:23.550 21:12:44 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:23.550 21:12:44 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:23.550 [2024-12-08 21:12:44.532362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.550 [2024-12-08 21:12:44.532607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:23.550 [2024-12-08 21:12:44.532737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:23.550 [2024-12-08 21:12:44.532782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.550 [2024-12-08 21:12:44.532912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.550 [2024-12-08 21:12:44.532972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:23.550 [2024-12-08 21:12:44.533014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:23.550 [2024-12-08 21:12:44.533180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.550 [2024-12-08 21:12:44.533316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.550 [2024-12-08 21:12:44.533373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:23.550 [2024-12-08 21:12:44.533556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:23.550 [2024-12-08 21:12:44.533603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.550 [2024-12-08 21:12:44.533707] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.325 ms, result 0 00:26:23.550 true 00:26:23.550 21:12:44 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:23.809 { 00:26:23.809 "name": "ftl", 00:26:23.809 "properties": [ 00:26:23.809 { 00:26:23.809 "name": "superblock_version", 00:26:23.809 "value": 5, 00:26:23.809 
"read-only": true 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "name": "base_device", 00:26:23.809 "bands": [ 00:26:23.809 { 00:26:23.809 "id": 0, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 1, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 2, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 3, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 4, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 5, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 6, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 7, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 8, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 9, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 10, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 11, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 12, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 13, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 14, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 15, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 16, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 17, 00:26:23.809 "state": "FREE", 00:26:23.809 "validity": 0.0 00:26:23.809 } 00:26:23.809 ], 00:26:23.809 "read-only": true 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "name": "cache_device", 00:26:23.809 "type": "bdev", 00:26:23.809 "chunks": [ 00:26:23.809 { 00:26:23.809 "id": 0, 00:26:23.809 "state": "CLOSED", 00:26:23.809 "utilization": 1.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 1, 00:26:23.809 "state": "CLOSED", 00:26:23.809 "utilization": 1.0 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 2, 00:26:23.809 "state": "OPEN", 00:26:23.809 "utilization": 0.001953125 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "id": 3, 00:26:23.809 "state": "OPEN", 00:26:23.809 "utilization": 0.0 00:26:23.809 } 00:26:23.809 ], 00:26:23.809 "read-only": true 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "name": "verbose_mode", 00:26:23.809 "value": true, 00:26:23.809 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:23.809 }, 00:26:23.809 { 00:26:23.809 "name": "prep_upgrade_on_shutdown", 00:26:23.809 "value": true, 00:26:23.809 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:23.810 } 00:26:23.810 ] 00:26:23.810 } 00:26:23.810 21:12:44 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:23.810 21:12:44 -- ftl/common.sh@130 -- # [[ -n 78195 ]] 00:26:23.810 21:12:44 -- ftl/common.sh@131 -- # killprocess 78195 00:26:23.810 21:12:44 -- common/autotest_common.sh@936 -- # '[' 
-z 78195 ']' 00:26:23.810 21:12:44 -- common/autotest_common.sh@940 -- # kill -0 78195 00:26:23.810 21:12:44 -- common/autotest_common.sh@941 -- # uname 00:26:23.810 21:12:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:23.810 21:12:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78195 00:26:24.068 killing process with pid 78195 00:26:24.068 21:12:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:24.068 21:12:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:24.068 21:12:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78195' 00:26:24.068 21:12:44 -- common/autotest_common.sh@955 -- # kill 78195 00:26:24.068 21:12:44 -- common/autotest_common.sh@960 -- # wait 78195 00:26:24.633 [2024-12-08 21:12:45.570925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:24.633 [2024-12-08 21:12:45.583466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.633 [2024-12-08 21:12:45.583507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:24.633 [2024-12-08 21:12:45.583530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:24.633 [2024-12-08 21:12:45.583540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.633 [2024-12-08 21:12:45.583571] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:24.633 [2024-12-08 21:12:45.586320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.633 [2024-12-08 21:12:45.586348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:24.633 [2024-12-08 21:12:45.586360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.731 ms 00:26:24.633 [2024-12-08 21:12:45.586369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.578967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.579021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:32.750 [2024-12-08 21:12:53.579039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7992.627 ms 00:26:32.750 [2024-12-08 21:12:53.579054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.580224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.580255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:32.750 [2024-12-08 21:12:53.580269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.135 ms 00:26:32.750 [2024-12-08 21:12:53.580279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.581423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.581452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:32.750 [2024-12-08 21:12:53.581465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:26:32.750 [2024-12-08 21:12:53.581475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.591746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.591902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:32.750 
[2024-12-08 21:12:53.591942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.218 ms 00:26:32.750 [2024-12-08 21:12:53.591954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.598712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.598864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:32.750 [2024-12-08 21:12:53.598904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.716 ms 00:26:32.750 [2024-12-08 21:12:53.598915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.599009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.599027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:32.750 [2024-12-08 21:12:53.599045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:32.750 [2024-12-08 21:12:53.599055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.609153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.609184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:32.750 [2024-12-08 21:12:53.609196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.061 ms 00:26:32.750 [2024-12-08 21:12:53.609204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.619265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.619411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:32.750 [2024-12-08 21:12:53.619449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.025 ms 00:26:32.750 [2024-12-08 21:12:53.619459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.629395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.629428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:32.750 [2024-12-08 21:12:53.629440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.895 ms 00:26:32.750 [2024-12-08 21:12:53.629448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.639467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.750 [2024-12-08 21:12:53.639501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:32.750 [2024-12-08 21:12:53.639530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.958 ms 00:26:32.750 [2024-12-08 21:12:53.639538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.750 [2024-12-08 21:12:53.639572] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:32.750 [2024-12-08 21:12:53.639591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:32.750 [2024-12-08 21:12:53.639603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:32.750 [2024-12-08 21:12:53.639614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:32.750 [2024-12-08 21:12:53.639623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:32.750 [2024-12-08 21:12:53.639633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:32.750 [2024-12-08 21:12:53.639642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:32.750 [2024-12-08 21:12:53.639651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:32.750 [2024-12-08 21:12:53.639660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:32.750 [2024-12-08 21:12:53.639670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:32.750 [2024-12-08 21:12:53.639679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:32.750 [2024-12-08 21:12:53.639689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:32.750 [2024-12-08 21:12:53.639698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:32.751 [2024-12-08 21:12:53.639707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:32.751 [2024-12-08 21:12:53.639717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:32.751 [2024-12-08 21:12:53.639726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:32.751 [2024-12-08 21:12:53.639749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:32.751 [2024-12-08 21:12:53.639759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:32.751 [2024-12-08 21:12:53.639768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:32.751 [2024-12-08 21:12:53.639780] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:32.751 [2024-12-08 21:12:53.639789] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b5946c2b-86c0-48f0-ab7d-7b4bbeb00991 00:26:32.751 [2024-12-08 21:12:53.639812] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:32.751 [2024-12-08 21:12:53.639821] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:32.751 [2024-12-08 21:12:53.639830] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:32.751 [2024-12-08 21:12:53.639840] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:32.751 [2024-12-08 21:12:53.639848] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:32.751 [2024-12-08 21:12:53.639862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:32.751 [2024-12-08 21:12:53.639870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:32.751 [2024-12-08 21:12:53.639878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:32.751 [2024-12-08 21:12:53.639886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:32.751 [2024-12-08 21:12:53.639896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.751 [2024-12-08 21:12:53.639905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:32.751 [2024-12-08 
21:12:53.639915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:26:32.751 [2024-12-08 21:12:53.639924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.751 [2024-12-08 21:12:53.653250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.751 [2024-12-08 21:12:53.653282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:32.751 [2024-12-08 21:12:53.653297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.306 ms 00:26:32.751 [2024-12-08 21:12:53.653311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.751 [2024-12-08 21:12:53.653492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.751 [2024-12-08 21:12:53.653505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:32.751 [2024-12-08 21:12:53.653515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:26:32.751 [2024-12-08 21:12:53.653523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.751 [2024-12-08 21:12:53.697373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:32.751 [2024-12-08 21:12:53.697410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:32.751 [2024-12-08 21:12:53.697430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:32.751 [2024-12-08 21:12:53.697442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.751 [2024-12-08 21:12:53.697476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:32.751 [2024-12-08 21:12:53.697488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:32.751 [2024-12-08 21:12:53.697497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:32.751 [2024-12-08 21:12:53.697505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.751 [2024-12-08 21:12:53.697578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:32.751 [2024-12-08 21:12:53.697593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:32.751 [2024-12-08 21:12:53.697603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:32.751 [2024-12-08 21:12:53.697612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.751 [2024-12-08 21:12:53.697637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:32.751 [2024-12-08 21:12:53.697649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:32.751 [2024-12-08 21:12:53.697657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:32.751 [2024-12-08 21:12:53.697665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.751 [2024-12-08 21:12:53.774145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:32.751 [2024-12-08 21:12:53.774199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:32.751 [2024-12-08 21:12:53.774213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:32.751 [2024-12-08 21:12:53.774228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.010 [2024-12-08 21:12:53.805135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:33.010 [2024-12-08 21:12:53.805169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: 
Initialize metadata 00:26:33.010 [2024-12-08 21:12:53.805184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:33.010 [2024-12-08 21:12:53.805193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.010 [2024-12-08 21:12:53.805263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:33.010 [2024-12-08 21:12:53.805278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:33.010 [2024-12-08 21:12:53.805288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:33.010 [2024-12-08 21:12:53.805297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.010 [2024-12-08 21:12:53.805347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:33.010 [2024-12-08 21:12:53.805362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:33.010 [2024-12-08 21:12:53.805371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:33.010 [2024-12-08 21:12:53.805379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.010 [2024-12-08 21:12:53.805474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:33.010 [2024-12-08 21:12:53.805489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:33.010 [2024-12-08 21:12:53.805498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:33.010 [2024-12-08 21:12:53.805518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.010 [2024-12-08 21:12:53.805563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:33.010 [2024-12-08 21:12:53.805582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:33.010 [2024-12-08 21:12:53.805592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:33.010 [2024-12-08 21:12:53.805600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.010 [2024-12-08 21:12:53.805639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:33.010 [2024-12-08 21:12:53.805652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:33.010 [2024-12-08 21:12:53.805660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:33.010 [2024-12-08 21:12:53.805669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.010 [2024-12-08 21:12:53.805717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:33.010 [2024-12-08 21:12:53.805732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:33.010 [2024-12-08 21:12:53.805741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:33.010 [2024-12-08 21:12:53.805751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.010 [2024-12-08 21:12:53.805870] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8222.433 ms, result 0 00:26:35.544 21:12:56 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:35.544 21:12:56 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:35.544 21:12:56 -- ftl/common.sh@81 -- # local base_bdev= 00:26:35.544 21:12:56 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:35.544 21:12:56 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:35.544 21:12:56 -- ftl/common.sh@89 -- # spdk_tgt_pid=78767 
00:26:35.544 21:12:56 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:35.544 21:12:56 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:35.544 21:12:56 -- ftl/common.sh@91 -- # waitforlisten 78767 00:26:35.544 21:12:56 -- common/autotest_common.sh@829 -- # '[' -z 78767 ']' 00:26:35.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.544 21:12:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.544 21:12:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.544 21:12:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.544 21:12:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.544 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:26:35.544 [2024-12-08 21:12:56.550730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:35.544 [2024-12-08 21:12:56.550863] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78767 ] 00:26:35.804 [2024-12-08 21:12:56.702685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.063 [2024-12-08 21:12:56.846226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:36.063 [2024-12-08 21:12:56.846433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.633 [2024-12-08 21:12:57.498753] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:36.633 [2024-12-08 21:12:57.498817] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:36.633 [2024-12-08 21:12:57.636612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.636652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:36.633 [2024-12-08 21:12:57.636670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:36.633 [2024-12-08 21:12:57.636679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.636744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.636764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:36.633 [2024-12-08 21:12:57.636774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:36.633 [2024-12-08 21:12:57.636783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.636810] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:36.633 [2024-12-08 21:12:57.637617] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:36.633 [2024-12-08 21:12:57.637646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.637657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:36.633 [2024-12-08 21:12:57.637668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.841 ms 00:26:36.633 [2024-12-08 21:12:57.637678] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.638943] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:36.633 [2024-12-08 21:12:57.651802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.651999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:36.633 [2024-12-08 21:12:57.652180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.860 ms 00:26:36.633 [2024-12-08 21:12:57.652205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.652281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.652299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:36.633 [2024-12-08 21:12:57.652311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:36.633 [2024-12-08 21:12:57.652322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.656316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.656352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:36.633 [2024-12-08 21:12:57.656368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.901 ms 00:26:36.633 [2024-12-08 21:12:57.656383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.656433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.656448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:36.633 [2024-12-08 21:12:57.656459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:36.633 [2024-12-08 21:12:57.656469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.656547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.656561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:36.633 [2024-12-08 21:12:57.656587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:36.633 [2024-12-08 21:12:57.656595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.656696] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:36.633 [2024-12-08 21:12:57.660291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.660323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:36.633 [2024-12-08 21:12:57.660341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.607 ms 00:26:36.633 [2024-12-08 21:12:57.660351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.660388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.633 [2024-12-08 21:12:57.660403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:36.633 [2024-12-08 21:12:57.660414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:36.633 [2024-12-08 21:12:57.660422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.633 [2024-12-08 21:12:57.660461] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL 
layout setup mode 0 00:26:36.633 [2024-12-08 21:12:57.660484] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:36.633 [2024-12-08 21:12:57.660533] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:36.633 [2024-12-08 21:12:57.660553] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:36.633 [2024-12-08 21:12:57.660615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:36.633 [2024-12-08 21:12:57.660628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:36.633 [2024-12-08 21:12:57.660639] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:36.633 [2024-12-08 21:12:57.660650] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:36.633 [2024-12-08 21:12:57.660661] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:36.633 [2024-12-08 21:12:57.660670] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:36.633 [2024-12-08 21:12:57.660682] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:36.633 [2024-12-08 21:12:57.660690] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:36.633 [2024-12-08 21:12:57.660701] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:36.634 [2024-12-08 21:12:57.660710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.634 [2024-12-08 21:12:57.660719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:36.634 [2024-12-08 21:12:57.660728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:26:36.634 [2024-12-08 21:12:57.660736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.634 [2024-12-08 21:12:57.660808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.634 [2024-12-08 21:12:57.660822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:36.634 [2024-12-08 21:12:57.660832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:36.634 [2024-12-08 21:12:57.660841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.634 [2024-12-08 21:12:57.660916] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:36.634 [2024-12-08 21:12:57.660930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:36.634 [2024-12-08 21:12:57.660940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:36.634 [2024-12-08 21:12:57.660949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.634 [2024-12-08 21:12:57.660958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:36.634 [2024-12-08 21:12:57.660966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:36.634 [2024-12-08 21:12:57.660974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:36.634 [2024-12-08 21:12:57.660982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:36.634 [2024-12-08 21:12:57.660991] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.62 MiB 00:26:36.634 [2024-12-08 21:12:57.661000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.634 [2024-12-08 21:12:57.661008] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:36.634 [2024-12-08 21:12:57.661016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:36.634 [2024-12-08 21:12:57.661025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.634 [2024-12-08 21:12:57.661033] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:36.634 [2024-12-08 21:12:57.661041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:36.634 [2024-12-08 21:12:57.661049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.634 [2024-12-08 21:12:57.661057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:36.634 [2024-12-08 21:12:57.661065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:36.634 [2024-12-08 21:12:57.661072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.634 [2024-12-08 21:12:57.661080] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:36.634 [2024-12-08 21:12:57.661088] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:36.634 [2024-12-08 21:12:57.661096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:36.634 [2024-12-08 21:12:57.661118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:36.634 [2024-12-08 21:12:57.661129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:36.634 [2024-12-08 21:12:57.661137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:36.634 [2024-12-08 21:12:57.661145] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:36.634 [2024-12-08 21:12:57.661153] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:36.634 [2024-12-08 21:12:57.661160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:36.634 [2024-12-08 21:12:57.661168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:36.634 [2024-12-08 21:12:57.661176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:36.634 [2024-12-08 21:12:57.661183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:36.634 [2024-12-08 21:12:57.661191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:36.634 [2024-12-08 21:12:57.661199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:36.634 [2024-12-08 21:12:57.661206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:36.634 [2024-12-08 21:12:57.661214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:36.634 [2024-12-08 21:12:57.661222] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:36.634 [2024-12-08 21:12:57.661230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.634 [2024-12-08 21:12:57.661238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:36.634 [2024-12-08 21:12:57.661245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:36.634 [2024-12-08 21:12:57.661253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.634 [2024-12-08 21:12:57.661260] ftl_layout.c: 766:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:36.634 [2024-12-08 21:12:57.661269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:36.634 [2024-12-08 21:12:57.661277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:36.634 [2024-12-08 21:12:57.661286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:36.634 [2024-12-08 21:12:57.661299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:36.634 [2024-12-08 21:12:57.661308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:36.634 [2024-12-08 21:12:57.661316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:36.634 [2024-12-08 21:12:57.661324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:36.634 [2024-12-08 21:12:57.661332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:36.634 [2024-12-08 21:12:57.661340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:36.634 [2024-12-08 21:12:57.661349] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:36.634 [2024-12-08 21:12:57.661360] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.634 [2024-12-08 21:12:57.661374] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:36.634 [2024-12-08 21:12:57.661383] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:36.634 [2024-12-08 21:12:57.661392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:36.634 [2024-12-08 21:12:57.661400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:36.634 [2024-12-08 21:12:57.661409] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:36.634 [2024-12-08 21:12:57.661429] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:36.634 [2024-12-08 21:12:57.661438] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:36.634 [2024-12-08 21:12:57.661447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:36.634 [2024-12-08 21:12:57.661455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:36.634 [2024-12-08 21:12:57.661464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:36.634 [2024-12-08 21:12:57.661472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:36.634 [2024-12-08 21:12:57.661481] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:36.634 [2024-12-08 21:12:57.661490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:36.634 [2024-12-08 21:12:57.661498] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:36.634 [2024-12-08 21:12:57.661507] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.634 [2024-12-08 21:12:57.661517] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:36.634 [2024-12-08 21:12:57.661525] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:36.634 [2024-12-08 21:12:57.661534] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:36.634 [2024-12-08 21:12:57.661542] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:36.634 [2024-12-08 21:12:57.661552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.634 [2024-12-08 21:12:57.661561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:36.634 [2024-12-08 21:12:57.661570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.671 ms 00:26:36.634 [2024-12-08 21:12:57.661578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.894 [2024-12-08 21:12:57.677148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.894 [2024-12-08 21:12:57.677323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:36.894 [2024-12-08 21:12:57.677459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.517 ms 00:26:36.894 [2024-12-08 21:12:57.677506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.894 [2024-12-08 21:12:57.677578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.894 [2024-12-08 21:12:57.677700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:36.894 [2024-12-08 21:12:57.677741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:36.894 [2024-12-08 21:12:57.677773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.894 [2024-12-08 21:12:57.708184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.894 [2024-12-08 21:12:57.708399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:36.894 [2024-12-08 21:12:57.708531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.312 ms 00:26:36.894 [2024-12-08 21:12:57.708578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.894 [2024-12-08 21:12:57.708652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.894 [2024-12-08 21:12:57.708751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:36.895 [2024-12-08 21:12:57.708803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:36.895 [2024-12-08 21:12:57.708834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.709209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.709336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:26:36.895 [2024-12-08 21:12:57.709441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:26:36.895 [2024-12-08 21:12:57.709486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.709641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.709689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:36.895 [2024-12-08 21:12:57.709729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:36.895 [2024-12-08 21:12:57.709761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.724257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.724434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:36.895 [2024-12-08 21:12:57.724576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.380 ms 00:26:36.895 [2024-12-08 21:12:57.724623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.737471] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:36.895 [2024-12-08 21:12:57.737666] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:36.895 [2024-12-08 21:12:57.737785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.737824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:36.895 [2024-12-08 21:12:57.737916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.013 ms 00:26:36.895 [2024-12-08 21:12:57.737973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.751944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.752138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:36.895 [2024-12-08 21:12:57.752272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.901 ms 00:26:36.895 [2024-12-08 21:12:57.752319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.764346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.764486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:36.895 [2024-12-08 21:12:57.764604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.955 ms 00:26:36.895 [2024-12-08 21:12:57.764647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.776594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.776748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:36.895 [2024-12-08 21:12:57.776851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.881 ms 00:26:36.895 [2024-12-08 21:12:57.776894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.777339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.777498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:36.895 [2024-12-08 21:12:57.777521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.310 ms 00:26:36.895 [2024-12-08 21:12:57.777534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.836522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.836578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:36.895 [2024-12-08 21:12:57.836595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 58.959 ms 00:26:36.895 [2024-12-08 21:12:57.836605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.846459] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:36.895 [2024-12-08 21:12:57.847041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.847084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:36.895 [2024-12-08 21:12:57.847117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.379 ms 00:26:36.895 [2024-12-08 21:12:57.847134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.847229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.847247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:36.895 [2024-12-08 21:12:57.847260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:36.895 [2024-12-08 21:12:57.847270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.847353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.847369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:36.895 [2024-12-08 21:12:57.847379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:36.895 [2024-12-08 21:12:57.847389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.849061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.849140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:36.895 [2024-12-08 21:12:57.849156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.629 ms 00:26:36.895 [2024-12-08 21:12:57.849165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.849206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.849219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:36.895 [2024-12-08 21:12:57.849231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:36.895 [2024-12-08 21:12:57.849240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.849281] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:36.895 [2024-12-08 21:12:57.849296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.849310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:36.895 [2024-12-08 21:12:57.849320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:36.895 [2024-12-08 21:12:57.849330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 
21:12:57.873199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.873235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:36.895 [2024-12-08 21:12:57.873249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.845 ms 00:26:36.895 [2024-12-08 21:12:57.873259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.873338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.895 [2024-12-08 21:12:57.873354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:36.895 [2024-12-08 21:12:57.873365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:36.895 [2024-12-08 21:12:57.873374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.895 [2024-12-08 21:12:57.874569] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 237.447 ms, result 0 00:26:36.895 [2024-12-08 21:12:57.889520] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.895 [2024-12-08 21:12:57.905540] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:36.895 [2024-12-08 21:12:57.913657] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:37.463 21:12:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:37.463 21:12:58 -- common/autotest_common.sh@862 -- # return 0 00:26:37.463 21:12:58 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:37.463 21:12:58 -- ftl/common.sh@95 -- # return 0 00:26:37.463 21:12:58 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:37.463 [2024-12-08 21:12:58.455062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.463 [2024-12-08 21:12:58.455136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:37.463 [2024-12-08 21:12:58.455171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:37.463 [2024-12-08 21:12:58.455182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.463 [2024-12-08 21:12:58.455213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.463 [2024-12-08 21:12:58.455226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:37.463 [2024-12-08 21:12:58.455237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:37.463 [2024-12-08 21:12:58.455252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.463 [2024-12-08 21:12:58.455278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.463 [2024-12-08 21:12:58.455290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:37.463 [2024-12-08 21:12:58.455301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:37.463 [2024-12-08 21:12:58.455310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.463 [2024-12-08 21:12:58.455375] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.303 ms, result 0 00:26:37.463 true 00:26:37.463 21:12:58 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_ftl_get_properties -b ftl 00:26:37.722 { 00:26:37.722 "name": "ftl", 00:26:37.722 "properties": [ 00:26:37.722 { 00:26:37.722 "name": "superblock_version", 00:26:37.722 "value": 5, 00:26:37.722 "read-only": true 00:26:37.722 }, 00:26:37.722 { 00:26:37.722 "name": "base_device", 00:26:37.722 "bands": [ 00:26:37.722 { 00:26:37.722 "id": 0, 00:26:37.722 "state": "CLOSED", 00:26:37.722 "validity": 1.0 00:26:37.722 }, 00:26:37.722 { 00:26:37.722 "id": 1, 00:26:37.722 "state": "CLOSED", 00:26:37.722 "validity": 1.0 00:26:37.722 }, 00:26:37.722 { 00:26:37.722 "id": 2, 00:26:37.722 "state": "CLOSED", 00:26:37.722 "validity": 0.007843137254901933 00:26:37.722 }, 00:26:37.722 { 00:26:37.722 "id": 3, 00:26:37.722 "state": "FREE", 00:26:37.722 "validity": 0.0 00:26:37.722 }, 00:26:37.722 { 00:26:37.722 "id": 4, 00:26:37.722 "state": "FREE", 00:26:37.722 "validity": 0.0 00:26:37.722 }, 00:26:37.722 { 00:26:37.722 "id": 5, 00:26:37.722 "state": "FREE", 00:26:37.722 "validity": 0.0 00:26:37.722 }, 00:26:37.722 { 00:26:37.722 "id": 6, 00:26:37.722 "state": "FREE", 00:26:37.722 "validity": 0.0 00:26:37.722 }, 00:26:37.722 { 00:26:37.722 "id": 7, 00:26:37.722 "state": "FREE", 00:26:37.722 "validity": 0.0 00:26:37.722 }, 00:26:37.722 { 00:26:37.722 "id": 8, 00:26:37.722 "state": "FREE", 00:26:37.722 "validity": 0.0 00:26:37.722 }, 00:26:37.722 { 00:26:37.723 "id": 9, 00:26:37.723 "state": "FREE", 00:26:37.723 "validity": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 10, 00:26:37.723 "state": "FREE", 00:26:37.723 "validity": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 11, 00:26:37.723 "state": "FREE", 00:26:37.723 "validity": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 12, 00:26:37.723 "state": "FREE", 00:26:37.723 "validity": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 13, 00:26:37.723 "state": "FREE", 00:26:37.723 "validity": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 14, 00:26:37.723 "state": "FREE", 00:26:37.723 "validity": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 15, 00:26:37.723 "state": "FREE", 00:26:37.723 "validity": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 16, 00:26:37.723 "state": "FREE", 00:26:37.723 "validity": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 17, 00:26:37.723 "state": "FREE", 00:26:37.723 "validity": 0.0 00:26:37.723 } 00:26:37.723 ], 00:26:37.723 "read-only": true 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "name": "cache_device", 00:26:37.723 "type": "bdev", 00:26:37.723 "chunks": [ 00:26:37.723 { 00:26:37.723 "id": 0, 00:26:37.723 "state": "OPEN", 00:26:37.723 "utilization": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 1, 00:26:37.723 "state": "OPEN", 00:26:37.723 "utilization": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 2, 00:26:37.723 "state": "FREE", 00:26:37.723 "utilization": 0.0 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "id": 3, 00:26:37.723 "state": "FREE", 00:26:37.723 "utilization": 0.0 00:26:37.723 } 00:26:37.723 ], 00:26:37.723 "read-only": true 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "name": "verbose_mode", 00:26:37.723 "value": true, 00:26:37.723 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:37.723 }, 00:26:37.723 { 00:26:37.723 "name": "prep_upgrade_on_shutdown", 00:26:37.723 "value": false, 00:26:37.723 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:37.723 } 00:26:37.723 ] 00:26:37.723 } 00:26:37.723 21:12:58 -- ftl/upgrade_shutdown.sh@82 -- # 
ftl_get_properties 00:26:37.723 21:12:58 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:37.723 21:12:58 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:37.989 21:12:58 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:37.989 21:12:58 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:37.989 21:12:58 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:37.989 21:12:58 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:37.989 21:12:58 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:38.251 Validate MD5 checksum, iteration 1 00:26:38.251 21:12:59 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:38.251 21:12:59 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:38.251 21:12:59 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:38.251 21:12:59 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:38.251 21:12:59 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:38.251 21:12:59 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:38.251 21:12:59 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:38.251 21:12:59 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:38.251 21:12:59 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:38.251 21:12:59 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:38.251 21:12:59 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:38.251 21:12:59 -- ftl/common.sh@154 -- # return 0 00:26:38.251 21:12:59 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:38.251 [2024-12-08 21:12:59.275454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:38.251 [2024-12-08 21:12:59.275839] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78812 ] 00:26:38.509 [2024-12-08 21:12:59.431136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.767 [2024-12-08 21:12:59.584258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.144  [2024-12-08T21:13:02.121Z] Copying: 519/1024 [MB] (519 MBps) [2024-12-08T21:13:02.121Z] Copying: 1019/1024 [MB] (500 MBps) [2024-12-08T21:13:03.496Z] Copying: 1024/1024 [MB] (average 509 MBps) 00:26:42.453 00:26:42.453 21:13:03 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:42.453 21:13:03 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:44.359 21:13:05 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:44.359 21:13:05 -- ftl/upgrade_shutdown.sh@103 -- # sum=eee211599f501ed100665ba7258e5300 00:26:44.359 21:13:05 -- ftl/upgrade_shutdown.sh@105 -- # [[ eee211599f501ed100665ba7258e5300 != \e\e\e\2\1\1\5\9\9\f\5\0\1\e\d\1\0\0\6\6\5\b\a\7\2\5\8\e\5\3\0\0 ]] 00:26:44.359 21:13:05 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:44.359 Validate MD5 checksum, iteration 2 00:26:44.359 21:13:05 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:44.359 21:13:05 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:44.359 21:13:05 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:44.359 21:13:05 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:44.360 21:13:05 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:44.360 21:13:05 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:44.360 21:13:05 -- ftl/common.sh@154 -- # return 0 00:26:44.360 21:13:05 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:44.360 [2024-12-08 21:13:05.315443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:44.360 [2024-12-08 21:13:05.315600] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78882 ] 00:26:44.620 [2024-12-08 21:13:05.487199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.880 [2024-12-08 21:13:05.681663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.259  [2024-12-08T21:13:08.239Z] Copying: 519/1024 [MB] (519 MBps) [2024-12-08T21:13:08.239Z] Copying: 1011/1024 [MB] (492 MBps) [2024-12-08T21:13:10.787Z] Copying: 1024/1024 [MB] (average 505 MBps) 00:26:49.744 00:26:49.744 21:13:10 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:49.744 21:13:10 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:51.651 21:13:12 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:51.651 21:13:12 -- ftl/upgrade_shutdown.sh@103 -- # sum=895def0507ff5cd6d4ac5183f1728c12 00:26:51.651 21:13:12 -- ftl/upgrade_shutdown.sh@105 -- # [[ 895def0507ff5cd6d4ac5183f1728c12 != \8\9\5\d\e\f\0\5\0\7\f\f\5\c\d\6\d\4\a\c\5\1\8\3\f\1\7\2\8\c\1\2 ]] 00:26:51.651 21:13:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:51.651 21:13:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:51.651 21:13:12 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:51.651 21:13:12 -- ftl/common.sh@137 -- # [[ -n 78767 ]] 00:26:51.651 21:13:12 -- ftl/common.sh@138 -- # kill -9 78767 00:26:51.651 21:13:12 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:51.651 21:13:12 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:51.651 21:13:12 -- ftl/common.sh@81 -- # local base_bdev= 00:26:51.651 21:13:12 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:51.651 21:13:12 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:51.651 21:13:12 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:51.651 21:13:12 -- ftl/common.sh@89 -- # spdk_tgt_pid=78960 00:26:51.651 21:13:12 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:51.651 21:13:12 -- ftl/common.sh@91 -- # waitforlisten 78960 00:26:51.651 21:13:12 -- common/autotest_common.sh@829 -- # '[' -z 78960 ']' 00:26:51.651 21:13:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.651 21:13:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.651 21:13:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.651 21:13:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.651 21:13:12 -- common/autotest_common.sh@10 -- # set +x 00:26:51.651 [2024-12-08 21:13:12.390017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
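Here is the crux of the test: tcp_target_shutdown_dirty kills the target with SIGKILL (pid 78767), so FTL never persists its clean-shutdown state, and tcp_target_setup then starts a fresh spdk_tgt (pid 78960) from the saved tgt.json, forcing recovery. In outline, following the ftl/common.sh line numbers in the trace (the backgrounding detail is a sketch, not the script's exact mechanism):

    # ftl/common.sh@137-139: dirty shutdown, no graceful FTL teardown
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # ftl/common.sh@84-91: restart from the config captured when the ftl bdev was created
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # autotest_common.sh helper: poll /var/tmp/spdk.sock until it answers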
00:26:51.651 [2024-12-08 21:13:12.390377] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78960 ] 00:26:51.651 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 78767 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:51.651 [2024-12-08 21:13:12.542394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.651 [2024-12-08 21:13:12.683213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:51.651 [2024-12-08 21:13:12.683689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.591 [2024-12-08 21:13:13.326656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:52.591 [2024-12-08 21:13:13.326893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:52.591 [2024-12-08 21:13:13.464816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.464858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:52.591 [2024-12-08 21:13:13.464891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:52.591 [2024-12-08 21:13:13.464902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.464969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.464990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:52.591 [2024-12-08 21:13:13.465000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:52.591 [2024-12-08 21:13:13.465010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.465039] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:52.591 [2024-12-08 21:13:13.465900] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:52.591 [2024-12-08 21:13:13.465930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.465942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:52.591 [2024-12-08 21:13:13.465953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.897 ms 00:26:52.591 [2024-12-08 21:13:13.465963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.466393] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:52.591 [2024-12-08 21:13:13.483801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.483838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:52.591 [2024-12-08 21:13:13.483870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.410 ms 00:26:52.591 [2024-12-08 21:13:13.483879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.493524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.493558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:52.591 [2024-12-08 21:13:13.493588] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:52.591 [2024-12-08 21:13:13.493597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.494012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.494035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:52.591 [2024-12-08 21:13:13.494047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:26:52.591 [2024-12-08 21:13:13.494057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.494155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.494173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:52.591 [2024-12-08 21:13:13.494184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:52.591 [2024-12-08 21:13:13.494197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.494231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.494245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:52.591 [2024-12-08 21:13:13.494256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:52.591 [2024-12-08 21:13:13.494265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.494317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:52.591 [2024-12-08 21:13:13.497826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.497857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:52.591 [2024-12-08 21:13:13.497886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.522 ms 00:26:52.591 [2024-12-08 21:13:13.497896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.497927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.497942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:52.591 [2024-12-08 21:13:13.497955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:52.591 [2024-12-08 21:13:13.497964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.498005] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:52.591 [2024-12-08 21:13:13.498032] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:52.591 [2024-12-08 21:13:13.498065] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:52.591 [2024-12-08 21:13:13.498097] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:52.591 [2024-12-08 21:13:13.498202] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:52.591 [2024-12-08 21:13:13.498221] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:52.591 [2024-12-08 21:13:13.498237] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:26:52.591 [2024-12-08 21:13:13.498250] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:52.591 [2024-12-08 21:13:13.498262] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:52.591 [2024-12-08 21:13:13.498273] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:52.591 [2024-12-08 21:13:13.498282] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:52.591 [2024-12-08 21:13:13.498291] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:52.591 [2024-12-08 21:13:13.498300] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:52.591 [2024-12-08 21:13:13.498310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.498320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:52.591 [2024-12-08 21:13:13.498330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:26:52.591 [2024-12-08 21:13:13.498342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.498407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.591 [2024-12-08 21:13:13.498435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:52.591 [2024-12-08 21:13:13.498445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:52.591 [2024-12-08 21:13:13.498455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.591 [2024-12-08 21:13:13.498566] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:52.591 [2024-12-08 21:13:13.498589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:52.591 [2024-12-08 21:13:13.498601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:52.591 [2024-12-08 21:13:13.498611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:52.591 [2024-12-08 21:13:13.498627] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:52.591 [2024-12-08 21:13:13.498636] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:52.591 [2024-12-08 21:13:13.498646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:52.591 [2024-12-08 21:13:13.498656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:52.591 [2024-12-08 21:13:13.498665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:52.591 [2024-12-08 21:13:13.498674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:52.591 [2024-12-08 21:13:13.498683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:52.591 [2024-12-08 21:13:13.498692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:52.591 [2024-12-08 21:13:13.498703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:52.591 [2024-12-08 21:13:13.498713] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:52.591 [2024-12-08 21:13:13.498721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:52.591 [2024-12-08 21:13:13.498730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:52.591 [2024-12-08 21:13:13.498739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:26:52.591 [2024-12-08 21:13:13.498749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:52.591 [2024-12-08 21:13:13.498758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:52.591 [2024-12-08 21:13:13.498767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:52.591 [2024-12-08 21:13:13.498776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:52.591 [2024-12-08 21:13:13.498785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:52.591 [2024-12-08 21:13:13.498795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:52.591 [2024-12-08 21:13:13.498804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:52.591 [2024-12-08 21:13:13.498812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:52.591 [2024-12-08 21:13:13.498821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:52.591 [2024-12-08 21:13:13.498831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:52.591 [2024-12-08 21:13:13.498840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:52.592 [2024-12-08 21:13:13.498849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:52.592 [2024-12-08 21:13:13.498858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:52.592 [2024-12-08 21:13:13.498866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:52.592 [2024-12-08 21:13:13.498875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:52.592 [2024-12-08 21:13:13.498884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:52.592 [2024-12-08 21:13:13.498893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:52.592 [2024-12-08 21:13:13.498901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:52.592 [2024-12-08 21:13:13.498910] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:52.592 [2024-12-08 21:13:13.498919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:52.592 [2024-12-08 21:13:13.498929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:52.592 [2024-12-08 21:13:13.498939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:52.592 [2024-12-08 21:13:13.498948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:52.592 [2024-12-08 21:13:13.498957] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:52.592 [2024-12-08 21:13:13.498967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:52.592 [2024-12-08 21:13:13.498976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:52.592 [2024-12-08 21:13:13.498987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:52.592 [2024-12-08 21:13:13.498998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:52.592 [2024-12-08 21:13:13.499008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:52.592 [2024-12-08 21:13:13.499016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:52.592 [2024-12-08 21:13:13.499026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:52.592 [2024-12-08 21:13:13.499035] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:26:52.592 [2024-12-08 21:13:13.499044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:52.592 [2024-12-08 21:13:13.499054] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:52.592 [2024-12-08 21:13:13.499067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.592 [2024-12-08 21:13:13.499094] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:52.592 [2024-12-08 21:13:13.499106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:52.592 [2024-12-08 21:13:13.499116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:52.592 [2024-12-08 21:13:13.499137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:52.592 [2024-12-08 21:13:13.499148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:52.592 [2024-12-08 21:13:13.499159] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:52.592 [2024-12-08 21:13:13.499169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:52.592 [2024-12-08 21:13:13.499179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:52.592 [2024-12-08 21:13:13.499189] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:52.592 [2024-12-08 21:13:13.499199] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:52.592 [2024-12-08 21:13:13.499209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:52.592 [2024-12-08 21:13:13.499219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:52.592 [2024-12-08 21:13:13.499229] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:52.592 [2024-12-08 21:13:13.499239] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:52.592 [2024-12-08 21:13:13.499249] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.592 [2024-12-08 21:13:13.499261] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:52.592 [2024-12-08 21:13:13.499271] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:52.592 [2024-12-08 21:13:13.499281] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:52.592 
[2024-12-08 21:13:13.499291] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:52.592 [2024-12-08 21:13:13.499302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.499312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:52.592 [2024-12-08 21:13:13.499322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.776 ms 00:26:52.592 [2024-12-08 21:13:13.499336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.514225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.514261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:52.592 [2024-12-08 21:13:13.514297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.832 ms 00:26:52.592 [2024-12-08 21:13:13.514307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.514349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.514362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:52.592 [2024-12-08 21:13:13.514371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:52.592 [2024-12-08 21:13:13.514381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.545993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.546036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:52.592 [2024-12-08 21:13:13.546068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.553 ms 00:26:52.592 [2024-12-08 21:13:13.546078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.546167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.546184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:52.592 [2024-12-08 21:13:13.546195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:52.592 [2024-12-08 21:13:13.546220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.546351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.546367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:52.592 [2024-12-08 21:13:13.546378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:26:52.592 [2024-12-08 21:13:13.546387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.546453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.546471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:52.592 [2024-12-08 21:13:13.546482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:52.592 [2024-12-08 21:13:13.546491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.561623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.561661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:52.592 [2024-12-08 
21:13:13.561676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.105 ms 00:26:52.592 [2024-12-08 21:13:13.561685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.561804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.561824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:52.592 [2024-12-08 21:13:13.561834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:52.592 [2024-12-08 21:13:13.561843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.578252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.578288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:52.592 [2024-12-08 21:13:13.578303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.389 ms 00:26:52.592 [2024-12-08 21:13:13.578313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.592 [2024-12-08 21:13:13.587665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.592 [2024-12-08 21:13:13.587830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:52.592 [2024-12-08 21:13:13.587854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:26:52.592 [2024-12-08 21:13:13.587865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.852 [2024-12-08 21:13:13.647848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.852 [2024-12-08 21:13:13.647903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:52.852 [2024-12-08 21:13:13.647919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 59.906 ms 00:26:52.852 [2024-12-08 21:13:13.647928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.852 [2024-12-08 21:13:13.648021] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:52.852 [2024-12-08 21:13:13.648062] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:52.852 [2024-12-08 21:13:13.648158] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:52.852 [2024-12-08 21:13:13.648201] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:52.852 [2024-12-08 21:13:13.648230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.852 [2024-12-08 21:13:13.648240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:52.852 [2024-12-08 21:13:13.648256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:26:52.852 [2024-12-08 21:13:13.648269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.852 [2024-12-08 21:13:13.648349] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:52.852 [2024-12-08 21:13:13.648368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.852 [2024-12-08 21:13:13.648379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:52.852 [2024-12-08 21:13:13.648390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:52.852 [2024-12-08 
21:13:13.648415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.852 [2024-12-08 21:13:13.663788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.852 [2024-12-08 21:13:13.663825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:52.852 [2024-12-08 21:13:13.663840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.315 ms 00:26:52.852 [2024-12-08 21:13:13.663850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.852 [2024-12-08 21:13:13.673047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.852 [2024-12-08 21:13:13.673095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:52.852 [2024-12-08 21:13:13.673127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:52.852 [2024-12-08 21:13:13.673137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.852 [2024-12-08 21:13:13.673218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.852 [2024-12-08 21:13:13.673235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:26:52.852 [2024-12-08 21:13:13.673245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:52.852 [2024-12-08 21:13:13.673255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.852 [2024-12-08 21:13:13.673388] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:26:53.419 [2024-12-08 21:13:14.239515] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:26:53.419 [2024-12-08 21:13:14.239694] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:26:53.999 [2024-12-08 21:13:14.802992] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:26:53.999 [2024-12-08 21:13:14.803157] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:53.999 [2024-12-08 21:13:14.803178] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:53.999 [2024-12-08 21:13:14.803192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-12-08 21:13:14.803205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:53.999 [2024-12-08 21:13:14.803235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1129.920 ms 00:26:53.999 [2024-12-08 21:13:14.803275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-12-08 21:13:14.803327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-12-08 21:13:14.803342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:53.999 [2024-12-08 21:13:14.803353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:53.999 [2024-12-08 21:13:14.803363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-12-08 21:13:14.814244] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:53.999 [2024-12-08 21:13:14.814378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-12-08 21:13:14.814399] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:54.000 [2024-12-08 21:13:14.814412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.994 ms 00:26:54.000 [2024-12-08 21:13:14.814422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.000 [2024-12-08 21:13:14.815129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.000 [2024-12-08 21:13:14.815179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:26:54.000 [2024-12-08 21:13:14.815195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.564 ms 00:26:54.000 [2024-12-08 21:13:14.815205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.000 [2024-12-08 21:13:14.817557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.000 [2024-12-08 21:13:14.817583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:54.000 [2024-12-08 21:13:14.817614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.321 ms 00:26:54.000 [2024-12-08 21:13:14.817622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.000 [2024-12-08 21:13:14.843529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.000 [2024-12-08 21:13:14.843569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:26:54.000 [2024-12-08 21:13:14.843602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.880 ms 00:26:54.000 [2024-12-08 21:13:14.843612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.000 [2024-12-08 21:13:14.843719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.000 [2024-12-08 21:13:14.843737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:54.000 [2024-12-08 21:13:14.843748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:54.000 [2024-12-08 21:13:14.843758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.000 [2024-12-08 21:13:14.845580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.000 [2024-12-08 21:13:14.845630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:54.000 [2024-12-08 21:13:14.845660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.801 ms 00:26:54.000 [2024-12-08 21:13:14.845670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.000 [2024-12-08 21:13:14.845708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.000 [2024-12-08 21:13:14.845722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:54.000 [2024-12-08 21:13:14.845733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:54.000 [2024-12-08 21:13:14.845742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.000 [2024-12-08 21:13:14.845796] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:54.000 [2024-12-08 21:13:14.845813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.000 [2024-12-08 21:13:14.845822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:54.000 [2024-12-08 21:13:14.845837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:54.000 [2024-12-08 21:13:14.845846] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:54.000 [2024-12-08 21:13:14.845905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.000 [2024-12-08 21:13:14.845921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:54.000 [2024-12-08 21:13:14.845931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:54.000 [2024-12-08 21:13:14.845955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.000 [2024-12-08 21:13:14.847201] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1381.847 ms, result 0 00:26:54.000 [2024-12-08 21:13:14.859947] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.000 [2024-12-08 21:13:14.875943] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:54.000 [2024-12-08 21:13:14.884090] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:54.607 Validate MD5 checksum, iteration 1 00:26:54.607 21:13:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:54.607 21:13:15 -- common/autotest_common.sh@862 -- # return 0 00:26:54.607 21:13:15 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:54.607 21:13:15 -- ftl/common.sh@95 -- # return 0 00:26:54.607 21:13:15 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:54.607 21:13:15 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:54.607 21:13:15 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:54.607 21:13:15 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:54.607 21:13:15 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:54.607 21:13:15 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:54.607 21:13:15 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:54.607 21:13:15 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:54.607 21:13:15 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:54.607 21:13:15 -- ftl/common.sh@154 -- # return 0 00:26:54.607 21:13:15 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:54.607 [2024-12-08 21:13:15.633222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
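Once FTL startup (including the open-chunk recovery) finishes in 1381.847 ms with result 0, the target re-creates the TCP transport and listens on 127.0.0.1:4420 so the initiator-side spdk_dd can reach ftln1 again. The RPC sequence behind those two notices is roughly the standard SPDK NVMe-oF export; the subsystem NQN below is an assumption, as the trace does not print it:

    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a    # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl   # export the recovered ftl bdev
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420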
00:26:54.607 [2024-12-08 21:13:15.633611] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79000 ] 00:26:54.865 [2024-12-08 21:13:15.791856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.124 [2024-12-08 21:13:15.998338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.499  [2024-12-08T21:13:18.917Z] Copying: 510/1024 [MB] (510 MBps) [2024-12-08T21:13:18.917Z] Copying: 1011/1024 [MB] (501 MBps) [2024-12-08T21:13:19.854Z] Copying: 1024/1024 [MB] (average 505 MBps) 00:26:58.811 00:26:58.811 21:13:19 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:58.811 21:13:19 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:00.716 21:13:21 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:00.716 21:13:21 -- ftl/upgrade_shutdown.sh@103 -- # sum=eee211599f501ed100665ba7258e5300 00:27:00.716 21:13:21 -- ftl/upgrade_shutdown.sh@105 -- # [[ eee211599f501ed100665ba7258e5300 != \e\e\e\2\1\1\5\9\9\f\5\0\1\e\d\1\0\0\6\6\5\b\a\7\2\5\8\e\5\3\0\0 ]] 00:27:00.716 Validate MD5 checksum, iteration 2 00:27:00.716 21:13:21 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:00.716 21:13:21 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:00.716 21:13:21 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:00.716 21:13:21 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:00.716 21:13:21 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:00.716 21:13:21 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:00.716 21:13:21 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:00.716 21:13:21 -- ftl/common.sh@154 -- # return 0 00:27:00.716 21:13:21 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:00.716 [2024-12-08 21:13:21.742120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
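A note on the [[ ... != \e\e\e... ]] and [[ ... != \8\9\5... ]] comparisons in this trace: the backslashes are bash xtrace escaping the right-hand side of a pattern match, not characters in the script. The source is a plain comparison of the freshly computed sum against the recorded one, e.g.:

    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    [[ $sum != "${md5[$i]}" ]] && return 1   # xtrace prints the expansion with each character escaped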
00:27:00.716 [2024-12-08 21:13:21.742494] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79067 ] 00:27:00.975 [2024-12-08 21:13:21.914133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.234 [2024-12-08 21:13:22.105930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.612  [2024-12-08T21:13:25.036Z] Copying: 524/1024 [MB] (524 MBps) [2024-12-08T21:13:25.973Z] Copying: 1024/1024 [MB] (average 516 MBps) 00:27:04.930 00:27:04.930 21:13:25 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:04.930 21:13:25 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@103 -- # sum=895def0507ff5cd6d4ac5183f1728c12 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@105 -- # [[ 895def0507ff5cd6d4ac5183f1728c12 != \8\9\5\d\e\f\0\5\0\7\f\f\5\c\d\6\d\4\a\c\5\1\8\3\f\1\7\2\8\c\1\2 ]] 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:06.835 21:13:27 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:06.835 21:13:27 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:06.835 21:13:27 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:06.835 21:13:27 -- ftl/common.sh@130 -- # [[ -n 78960 ]] 00:27:06.835 21:13:27 -- ftl/common.sh@131 -- # killprocess 78960 00:27:06.835 21:13:27 -- common/autotest_common.sh@936 -- # '[' -z 78960 ']' 00:27:06.835 21:13:27 -- common/autotest_common.sh@940 -- # kill -0 78960 00:27:06.835 21:13:27 -- common/autotest_common.sh@941 -- # uname 00:27:06.835 21:13:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:06.835 21:13:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78960 00:27:06.835 killing process with pid 78960 00:27:06.835 21:13:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:06.835 21:13:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:06.835 21:13:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78960' 00:27:06.835 21:13:27 -- common/autotest_common.sh@955 -- # kill 78960 00:27:06.835 21:13:27 -- common/autotest_common.sh@960 -- # wait 78960 00:27:07.404 [2024-12-08 21:13:28.388295] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:07.404 [2024-12-08 21:13:28.403472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.404 [2024-12-08 21:13:28.403511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:07.404 [2024-12-08 21:13:28.403529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:07.404 [2024-12-08 21:13:28.403539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.404 
[2024-12-08 21:13:28.403564] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:07.404 [2024-12-08 21:13:28.406248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.404 [2024-12-08 21:13:28.406281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:07.404 [2024-12-08 21:13:28.406309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.668 ms 00:27:07.404 [2024-12-08 21:13:28.406319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.404 [2024-12-08 21:13:28.406559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.404 [2024-12-08 21:13:28.406583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:07.404 [2024-12-08 21:13:28.406594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:27:07.404 [2024-12-08 21:13:28.406603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.404 [2024-12-08 21:13:28.407839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.404 [2024-12-08 21:13:28.407994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:07.404 [2024-12-08 21:13:28.408170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.218 ms 00:27:07.404 [2024-12-08 21:13:28.408218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.404 [2024-12-08 21:13:28.409310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.404 [2024-12-08 21:13:28.409494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:07.404 [2024-12-08 21:13:28.409595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.025 ms 00:27:07.404 [2024-12-08 21:13:28.409638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.404 [2024-12-08 21:13:28.419684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.404 [2024-12-08 21:13:28.419829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:07.404 [2024-12-08 21:13:28.419934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.970 ms 00:27:07.404 [2024-12-08 21:13:28.419977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.404 [2024-12-08 21:13:28.425533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.404 [2024-12-08 21:13:28.425682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:07.404 [2024-12-08 21:13:28.425706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.491 ms 00:27:07.404 [2024-12-08 21:13:28.425716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.404 [2024-12-08 21:13:28.425795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.404 [2024-12-08 21:13:28.425811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:07.404 [2024-12-08 21:13:28.425822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:07.404 [2024-12-08 21:13:28.425840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.404 [2024-12-08 21:13:28.435902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.404 [2024-12-08 21:13:28.435936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:07.404 [2024-12-08 21:13:28.435949] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.043 ms 00:27:07.404 [2024-12-08 21:13:28.435957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.664 [2024-12-08 21:13:28.446537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.664 [2024-12-08 21:13:28.446726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:07.664 [2024-12-08 21:13:28.446750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.546 ms 00:27:07.664 [2024-12-08 21:13:28.446760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.664 [2024-12-08 21:13:28.457853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.664 [2024-12-08 21:13:28.457890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:07.664 [2024-12-08 21:13:28.457921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.052 ms 00:27:07.664 [2024-12-08 21:13:28.457931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.664 [2024-12-08 21:13:28.469645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.664 [2024-12-08 21:13:28.469677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:07.664 [2024-12-08 21:13:28.469707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.652 ms 00:27:07.664 [2024-12-08 21:13:28.469716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.664 [2024-12-08 21:13:28.469750] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:07.664 [2024-12-08 21:13:28.469769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:07.664 [2024-12-08 21:13:28.469788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:07.664 [2024-12-08 21:13:28.469798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:07.664 [2024-12-08 21:13:28.469808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469902] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:07.664 [2024-12-08 21:13:28.469962] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:07.664 [2024-12-08 21:13:28.469971] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b5946c2b-86c0-48f0-ab7d-7b4bbeb00991 00:27:07.664 [2024-12-08 21:13:28.469981] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:07.664 [2024-12-08 21:13:28.469989] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:07.664 [2024-12-08 21:13:28.469998] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:07.664 [2024-12-08 21:13:28.470007] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:07.664 [2024-12-08 21:13:28.470015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:07.664 [2024-12-08 21:13:28.470024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:07.664 [2024-12-08 21:13:28.470033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:07.664 [2024-12-08 21:13:28.470041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:07.665 [2024-12-08 21:13:28.470049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:07.665 [2024-12-08 21:13:28.470060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.665 [2024-12-08 21:13:28.470069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:07.665 [2024-12-08 21:13:28.470129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:27:07.665 [2024-12-08 21:13:28.470145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.484584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.665 [2024-12-08 21:13:28.484758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:07.665 [2024-12-08 21:13:28.484783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.417 ms 00:27:07.665 [2024-12-08 21:13:28.484794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.485012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.665 [2024-12-08 21:13:28.485031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:07.665 [2024-12-08 21:13:28.485049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:27:07.665 [2024-12-08 21:13:28.485059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.532006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.532046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:07.665 [2024-12-08 21:13:28.532075] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.532097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.532192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.532222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:07.665 [2024-12-08 21:13:28.532239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.532250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.532338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.532357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:07.665 [2024-12-08 21:13:28.532368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.532378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.532412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.532441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:07.665 [2024-12-08 21:13:28.532467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.532497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.618913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.618961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:07.665 [2024-12-08 21:13:28.619008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.619018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.650488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.650522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:07.665 [2024-12-08 21:13:28.650560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.650569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.650639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.650656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:07.665 [2024-12-08 21:13:28.650667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.650676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.650724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.650743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:07.665 [2024-12-08 21:13:28.650753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.650762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.650866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.650883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:07.665 [2024-12-08 
21:13:28.650893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.650902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.650955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.650970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:07.665 [2024-12-08 21:13:28.650981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.650990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.651034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.651048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:07.665 [2024-12-08 21:13:28.651058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.651067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.651172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:07.665 [2024-12-08 21:13:28.651204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:07.665 [2024-12-08 21:13:28.651214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:07.665 [2024-12-08 21:13:28.651224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.665 [2024-12-08 21:13:28.651389] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 247.878 ms, result 0 00:27:08.612 21:13:29 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:08.612 21:13:29 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:08.612 21:13:29 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:08.612 21:13:29 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:08.612 21:13:29 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:08.612 21:13:29 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:08.612 Remove shared memory files 00:27:08.612 21:13:29 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:08.612 21:13:29 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:08.612 21:13:29 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:08.612 21:13:29 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:08.612 21:13:29 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78767 00:27:08.612 21:13:29 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:08.612 21:13:29 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:08.612 ************************************ 00:27:08.612 END TEST ftl_upgrade_shutdown 00:27:08.612 ************************************ 00:27:08.612 00:27:08.612 real 1m22.325s 00:27:08.612 user 1m59.750s 00:27:08.612 sys 0m20.768s 00:27:08.612 21:13:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:08.612 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:27:08.612 21:13:29 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:08.612 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:08.612 21:13:29 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:08.612 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:08.612 21:13:29 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:08.612 21:13:29 -- ftl/ftl.sh@14 -- # killprocess 71091 
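The two "[: -eq: unary operator expected" messages from ftl.sh lines 82 and 89 are a shell quoting quirk rather than an FTL failure: the xtrace shows the literal command '[' -eq 1 ']', meaning the variable being compared expanded to an empty string, so bash's test builtin saw a lone -eq. A minimal sketch of the failing pattern and a guarded form (FLAG is an illustrative name, not necessarily the variable ftl.sh actually tests):

    # With FLAG unset and unquoted, [ $FLAG -eq 1 ] collapses to [ -eq 1 ]
    unset FLAG
    [ $FLAG -eq 1 ] && echo "flag set"          # -> "[: -eq: unary operator expected"

    # Defaulting the expansion keeps the comparison well-formed
    [ "${FLAG:-0}" -eq 1 ] && echo "flag set"   # quietly false when FLAG is unset/empty
    [[ ${FLAG:-0} -eq 1 ]] && echo "flag set"   # [[ ]] additionally avoids word splitting

As the log shows, the script continues past both checks, so the messages are benign here; the guarded form would simply avoid the noise.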
00:27:08.612 21:13:29 -- common/autotest_common.sh@936 -- # '[' -z 71091 ']' 00:27:08.612 Process with pid 71091 is not found 00:27:08.612 21:13:29 -- common/autotest_common.sh@940 -- # kill -0 71091 00:27:08.612 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (71091) - No such process 00:27:08.612 21:13:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 71091 is not found' 00:27:08.612 21:13:29 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:08.612 21:13:29 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79175 00:27:08.612 21:13:29 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:08.612 21:13:29 -- ftl/ftl.sh@20 -- # waitforlisten 79175 00:27:08.612 21:13:29 -- common/autotest_common.sh@829 -- # '[' -z 79175 ']' 00:27:08.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.612 21:13:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.612 21:13:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.612 21:13:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.612 21:13:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.612 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:27:08.872 [2024-12-08 21:13:29.690855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:08.872 [2024-12-08 21:13:29.691031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79175 ] 00:27:08.872 [2024-12-08 21:13:29.854993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.131 [2024-12-08 21:13:30.003491] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:09.131 [2024-12-08 21:13:30.003721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.506 21:13:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:10.506 21:13:31 -- common/autotest_common.sh@862 -- # return 0 00:27:10.506 21:13:31 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:10.506 nvme0n1 00:27:10.506 21:13:31 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:10.506 21:13:31 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:10.506 21:13:31 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:10.765 21:13:31 -- ftl/common.sh@28 -- # stores=3393f6e2-80d8-4bb0-b879-5e8776fa17b2 00:27:10.765 21:13:31 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:10.765 21:13:31 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3393f6e2-80d8-4bb0-b879-5e8776fa17b2 00:27:11.023 21:13:31 -- ftl/ftl.sh@23 -- # killprocess 79175 00:27:11.023 21:13:32 -- common/autotest_common.sh@936 -- # '[' -z 79175 ']' 00:27:11.023 21:13:32 -- common/autotest_common.sh@940 -- # kill -0 79175 00:27:11.023 21:13:32 -- common/autotest_common.sh@941 -- # uname 00:27:11.023 21:13:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:11.023 21:13:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79175 00:27:11.023 killing process with pid 79175 00:27:11.023 21:13:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
00:27:11.023 21:13:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:11.023 21:13:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79175' 00:27:11.023 21:13:32 -- common/autotest_common.sh@955 -- # kill 79175 00:27:11.023 21:13:32 -- common/autotest_common.sh@960 -- # wait 79175 00:27:12.923 21:13:33 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:12.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:13.181 Waiting for block devices as requested 00:27:13.181 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.181 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.181 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.438 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:18.708 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:18.708 21:13:39 -- ftl/ftl.sh@28 -- # remove_shm 00:27:18.708 Remove shared memory files 00:27:18.708 21:13:39 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:18.708 21:13:39 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:18.708 21:13:39 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:18.708 21:13:39 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:18.708 21:13:39 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:18.708 21:13:39 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:18.708 ************************************ 00:27:18.708 END TEST ftl 00:27:18.708 ************************************ 00:27:18.708 00:27:18.708 real 11m46.001s 00:27:18.708 user 14m33.143s 00:27:18.708 sys 1m27.938s 00:27:18.708 21:13:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:18.708 21:13:39 -- common/autotest_common.sh@10 -- # set +x 00:27:18.708 21:13:39 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:18.708 21:13:39 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:18.708 21:13:39 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:18.708 21:13:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:18.708 21:13:39 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:18.708 21:13:39 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:18.708 21:13:39 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:18.708 21:13:39 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:18.708 21:13:39 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:18.708 21:13:39 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:18.708 21:13:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.708 21:13:39 -- common/autotest_common.sh@10 -- # set +x 00:27:18.708 21:13:39 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:18.708 21:13:39 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:18.708 21:13:39 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:18.708 21:13:39 -- common/autotest_common.sh@10 -- # set +x 00:27:20.087 INFO: APP EXITING 00:27:20.087 INFO: killing all VMs 00:27:20.087 INFO: killing vhost app 00:27:20.087 INFO: EXIT DONE 00:27:21.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:21.025 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:21.025 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:21.025 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:21.025 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:21.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, 
so not binding PCI dev 00:27:21.853 Cleaning 00:27:21.853 Removing: /var/run/dpdk/spdk0/config 00:27:21.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:21.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:21.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:21.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:21.853 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:21.853 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:21.853 Removing: /var/run/dpdk/spdk0 00:27:21.853 Removing: /var/run/dpdk/spdk_pid56277 00:27:21.853 Removing: /var/run/dpdk/spdk_pid56478 00:27:21.853 Removing: /var/run/dpdk/spdk_pid56802 00:27:21.853 Removing: /var/run/dpdk/spdk_pid56906 00:27:21.853 Removing: /var/run/dpdk/spdk_pid57002 00:27:21.853 Removing: /var/run/dpdk/spdk_pid57119 00:27:21.853 Removing: /var/run/dpdk/spdk_pid57215 00:27:21.853 Removing: /var/run/dpdk/spdk_pid57255 00:27:21.853 Removing: /var/run/dpdk/spdk_pid57297 00:27:21.853 Removing: /var/run/dpdk/spdk_pid57371 00:27:21.853 Removing: /var/run/dpdk/spdk_pid57478 00:27:21.853 Removing: /var/run/dpdk/spdk_pid57924 00:27:21.853 Removing: /var/run/dpdk/spdk_pid57988 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58064 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58075 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58195 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58210 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58332 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58348 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58401 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58432 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58493 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58518 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58706 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58743 00:27:21.853 Removing: /var/run/dpdk/spdk_pid58831 00:27:21.854 Removing: /var/run/dpdk/spdk_pid58895 00:27:21.854 Removing: /var/run/dpdk/spdk_pid58932 00:27:21.854 Removing: /var/run/dpdk/spdk_pid58999 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59025 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59066 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59092 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59143 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59169 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59211 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59237 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59279 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59305 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59346 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59372 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59414 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59445 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59486 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59512 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59553 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59579 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59620 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59646 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59696 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59722 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59763 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59793 00:27:21.854 Removing: /var/run/dpdk/spdk_pid59835 00:27:22.113 Removing: /var/run/dpdk/spdk_pid59861 00:27:22.113 Removing: /var/run/dpdk/spdk_pid59902 00:27:22.113 Removing: /var/run/dpdk/spdk_pid59928 00:27:22.114 Removing: /var/run/dpdk/spdk_pid59975 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60001 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60042 00:27:22.114 
Removing: /var/run/dpdk/spdk_pid60068 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60109 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60138 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60187 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60217 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60261 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60287 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60332 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60359 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60401 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60485 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60608 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60778 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60875 00:27:22.114 Removing: /var/run/dpdk/spdk_pid60912 00:27:22.114 Removing: /var/run/dpdk/spdk_pid61355 00:27:22.114 Removing: /var/run/dpdk/spdk_pid61542 00:27:22.114 Removing: /var/run/dpdk/spdk_pid61646 00:27:22.114 Removing: /var/run/dpdk/spdk_pid61699 00:27:22.114 Removing: /var/run/dpdk/spdk_pid61725 00:27:22.114 Removing: /var/run/dpdk/spdk_pid61808 00:27:22.114 Removing: /var/run/dpdk/spdk_pid62461 00:27:22.114 Removing: /var/run/dpdk/spdk_pid62503 00:27:22.114 Removing: /var/run/dpdk/spdk_pid62998 00:27:22.114 Removing: /var/run/dpdk/spdk_pid63107 00:27:22.114 Removing: /var/run/dpdk/spdk_pid63211 00:27:22.114 Removing: /var/run/dpdk/spdk_pid63265 00:27:22.114 Removing: /var/run/dpdk/spdk_pid63296 00:27:22.114 Removing: /var/run/dpdk/spdk_pid63322 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65267 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65415 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65419 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65441 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65478 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65486 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65499 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65545 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65549 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65565 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65611 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65615 00:27:22.114 Removing: /var/run/dpdk/spdk_pid65627 00:27:22.114 Removing: /var/run/dpdk/spdk_pid67062 00:27:22.114 Removing: /var/run/dpdk/spdk_pid67171 00:27:22.114 Removing: /var/run/dpdk/spdk_pid67307 00:27:22.114 Removing: /var/run/dpdk/spdk_pid67404 00:27:22.114 Removing: /var/run/dpdk/spdk_pid67508 00:27:22.114 Removing: /var/run/dpdk/spdk_pid67601 00:27:22.114 Removing: /var/run/dpdk/spdk_pid67722 00:27:22.114 Removing: /var/run/dpdk/spdk_pid67796 00:27:22.114 Removing: /var/run/dpdk/spdk_pid67943 00:27:22.114 Removing: /var/run/dpdk/spdk_pid68332 00:27:22.114 Removing: /var/run/dpdk/spdk_pid68363 00:27:22.114 Removing: /var/run/dpdk/spdk_pid68824 00:27:22.114 Removing: /var/run/dpdk/spdk_pid69010 00:27:22.114 Removing: /var/run/dpdk/spdk_pid69112 00:27:22.114 Removing: /var/run/dpdk/spdk_pid69222 00:27:22.114 Removing: /var/run/dpdk/spdk_pid69270 00:27:22.114 Removing: /var/run/dpdk/spdk_pid69295 00:27:22.114 Removing: /var/run/dpdk/spdk_pid69590 00:27:22.114 Removing: /var/run/dpdk/spdk_pid69652 00:27:22.114 Removing: /var/run/dpdk/spdk_pid69732 00:27:22.114 Removing: /var/run/dpdk/spdk_pid70145 00:27:22.114 Removing: /var/run/dpdk/spdk_pid70286 00:27:22.114 Removing: /var/run/dpdk/spdk_pid71091 00:27:22.114 Removing: /var/run/dpdk/spdk_pid71228 00:27:22.114 Removing: /var/run/dpdk/spdk_pid71438 00:27:22.114 Removing: /var/run/dpdk/spdk_pid71542 00:27:22.114 Removing: /var/run/dpdk/spdk_pid71898 00:27:22.114 Removing: 
/var/run/dpdk/spdk_pid72189 00:27:22.114 Removing: /var/run/dpdk/spdk_pid72551 00:27:22.114 Removing: /var/run/dpdk/spdk_pid72749 00:27:22.114 Removing: /var/run/dpdk/spdk_pid72896 00:27:22.114 Removing: /var/run/dpdk/spdk_pid72949 00:27:22.374 Removing: /var/run/dpdk/spdk_pid73098 00:27:22.374 Removing: /var/run/dpdk/spdk_pid73127 00:27:22.374 Removing: /var/run/dpdk/spdk_pid73181 00:27:22.374 Removing: /var/run/dpdk/spdk_pid73400 00:27:22.374 Removing: /var/run/dpdk/spdk_pid73638 00:27:22.374 Removing: /var/run/dpdk/spdk_pid74101 00:27:22.374 Removing: /var/run/dpdk/spdk_pid74605 00:27:22.374 Removing: /var/run/dpdk/spdk_pid75081 00:27:22.374 Removing: /var/run/dpdk/spdk_pid75651 00:27:22.374 Removing: /var/run/dpdk/spdk_pid75788 00:27:22.374 Removing: /var/run/dpdk/spdk_pid75876 00:27:22.374 Removing: /var/run/dpdk/spdk_pid76588 00:27:22.374 Removing: /var/run/dpdk/spdk_pid76664 00:27:22.374 Removing: /var/run/dpdk/spdk_pid77193 00:27:22.374 Removing: /var/run/dpdk/spdk_pid77630 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78195 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78306 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78361 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78425 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78491 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78556 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78767 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78812 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78882 00:27:22.374 Removing: /var/run/dpdk/spdk_pid78960 00:27:22.374 Removing: /var/run/dpdk/spdk_pid79000 00:27:22.374 Removing: /var/run/dpdk/spdk_pid79067 00:27:22.374 Removing: /var/run/dpdk/spdk_pid79175 00:27:22.374 Clean 00:27:22.374 killing process with pid 48455 00:27:22.374 killing process with pid 48457 00:27:22.374 21:13:43 -- common/autotest_common.sh@1446 -- # return 0 00:27:22.374 21:13:43 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:22.374 21:13:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.374 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:27:22.374 21:13:43 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:22.374 21:13:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.374 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:27:22.633 21:13:43 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:22.633 21:13:43 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:22.633 21:13:43 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:22.633 21:13:43 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:22.633 21:13:43 -- spdk/autotest.sh@383 -- # hostname 00:27:22.633 21:13:43 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:22.892 geninfo: WARNING: invalid characters removed from testname! 
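The coverage post-processing that runs next is a standard lcov capture/merge/filter pass. Condensed, with the repeated --rc options and absolute paths trimmed ($SPDK_DIR and $OUT stand in for /home/vagrant/spdk_repo/spdk and its ../output directory; cov_base.info is a baseline captured earlier in the job, not shown here), the sequence amounts to roughly:

    # Capture the counters produced by this test run
    lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge the baseline and test captures into a single tracefile
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Strip third-party and uninteresting paths from the report
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" --ignore-errors unused '/usr/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"

The "invalid characters removed from testname" warning is geninfo sanitizing the long fedora39 image string passed via -t; it concerns only the test name recorded in the tracefile, not the coverage data.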
00:27:44.864 21:14:03 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:46.243 21:14:06 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:48.147 21:14:09 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:50.691 21:14:11 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:52.594 21:14:13 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:55.127 21:14:15 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:57.031 21:14:17 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:57.031 21:14:18 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:57.031 21:14:18 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:57.031 21:14:18 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:57.031 21:14:18 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:57.031 21:14:18 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:57.031 21:14:18 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:27:57.031 21:14:18 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:57.031 21:14:18 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:57.031 21:14:18 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:57.031 21:14:18 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:57.031 21:14:18 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:57.031 21:14:18 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:57.031 21:14:18 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:57.031 21:14:18 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:57.031 21:14:18 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:27:57.031 21:14:18 -- scripts/common.sh@343 -- $ case "$op" in 00:27:57.031 21:14:18 -- scripts/common.sh@344 -- $ : 1 00:27:57.031 21:14:18 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:57.031 21:14:18 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:57.031 21:14:18 -- scripts/common.sh@364 -- $ decimal 1 00:27:57.031 21:14:18 -- scripts/common.sh@352 -- $ local d=1 00:27:57.031 21:14:18 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:57.031 21:14:18 -- scripts/common.sh@354 -- $ echo 1 00:27:57.031 21:14:18 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:57.292 21:14:18 -- scripts/common.sh@365 -- $ decimal 2 00:27:57.292 21:14:18 -- scripts/common.sh@352 -- $ local d=2 00:27:57.292 21:14:18 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:57.292 21:14:18 -- scripts/common.sh@354 -- $ echo 2 00:27:57.292 21:14:18 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:57.292 21:14:18 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:57.292 21:14:18 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:57.292 21:14:18 -- scripts/common.sh@367 -- $ return 0 00:27:57.292 21:14:18 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.292 21:14:18 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:57.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.292 --rc genhtml_branch_coverage=1 00:27:57.292 --rc genhtml_function_coverage=1 00:27:57.292 --rc genhtml_legend=1 00:27:57.292 --rc geninfo_all_blocks=1 00:27:57.292 --rc geninfo_unexecuted_blocks=1 00:27:57.292 00:27:57.292 ' 00:27:57.292 21:14:18 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:57.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.292 --rc genhtml_branch_coverage=1 00:27:57.292 --rc genhtml_function_coverage=1 00:27:57.292 --rc genhtml_legend=1 00:27:57.292 --rc geninfo_all_blocks=1 00:27:57.292 --rc geninfo_unexecuted_blocks=1 00:27:57.292 00:27:57.292 ' 00:27:57.292 21:14:18 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:57.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.292 --rc genhtml_branch_coverage=1 00:27:57.292 --rc genhtml_function_coverage=1 00:27:57.292 --rc genhtml_legend=1 00:27:57.292 --rc geninfo_all_blocks=1 00:27:57.292 --rc geninfo_unexecuted_blocks=1 00:27:57.292 00:27:57.292 ' 00:27:57.292 21:14:18 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:57.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.292 --rc genhtml_branch_coverage=1 00:27:57.292 --rc genhtml_function_coverage=1 00:27:57.292 --rc genhtml_legend=1 00:27:57.292 --rc geninfo_all_blocks=1 00:27:57.292 --rc geninfo_unexecuted_blocks=1 00:27:57.292 00:27:57.292 ' 00:27:57.292 21:14:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:57.292 21:14:18 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:57.292 21:14:18 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.292 21:14:18 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.292 21:14:18 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.292 21:14:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.292 21:14:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.292 21:14:18 -- paths/export.sh@5 -- $ export PATH 00:27:57.292 21:14:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.292 21:14:18 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:57.292 21:14:18 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:57.292 21:14:18 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733692458.XXXXXX 00:27:57.292 21:14:18 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733692458.p4NSNF 00:27:57.292 21:14:18 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:57.292 21:14:18 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:27:57.292 21:14:18 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:57.292 21:14:18 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:57.292 21:14:18 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:57.292 21:14:18 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:57.292 21:14:18 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:57.292 21:14:18 -- common/autotest_common.sh@10 -- $ set +x 00:27:57.292 21:14:18 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:27:57.292 21:14:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:57.292 21:14:18 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:57.292 21:14:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:57.292 21:14:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:57.292 21:14:18 -- 
spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:57.292 21:14:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:57.292 21:14:18 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:57.292 21:14:18 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:57.292 21:14:18 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:57.292 21:14:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:57.292 + [[ -n 5256 ]] 00:27:57.292 + sudo kill 5256 00:27:57.303 [Pipeline] } 00:27:57.319 [Pipeline] // timeout 00:27:57.324 [Pipeline] } 00:27:57.341 [Pipeline] // stage 00:27:57.347 [Pipeline] } 00:27:57.361 [Pipeline] // catchError 00:27:57.371 [Pipeline] stage 00:27:57.373 [Pipeline] { (Stop VM) 00:27:57.386 [Pipeline] sh 00:27:57.668 + vagrant halt 00:28:00.957 ==> default: Halting domain... 00:28:07.542 [Pipeline] sh 00:28:07.824 + vagrant destroy -f 00:28:10.358 ==> default: Removing domain... 00:28:10.938 [Pipeline] sh 00:28:11.219 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:28:11.230 [Pipeline] } 00:28:11.245 [Pipeline] // stage 00:28:11.251 [Pipeline] } 00:28:11.269 [Pipeline] // dir 00:28:11.275 [Pipeline] } 00:28:11.292 [Pipeline] // wrap 00:28:11.299 [Pipeline] } 00:28:11.314 [Pipeline] // catchError 00:28:11.325 [Pipeline] stage 00:28:11.327 [Pipeline] { (Epilogue) 00:28:11.340 [Pipeline] sh 00:28:11.666 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:15.906 [Pipeline] catchError 00:28:15.908 [Pipeline] { 00:28:15.920 [Pipeline] sh 00:28:16.200 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:16.458 Artifacts sizes are good 00:28:16.467 [Pipeline] } 00:28:16.481 [Pipeline] // catchError 00:28:16.491 [Pipeline] archiveArtifacts 00:28:16.497 Archiving artifacts 00:28:16.595 [Pipeline] cleanWs 00:28:16.606 [WS-CLEANUP] Deleting project workspace... 00:28:16.606 [WS-CLEANUP] Deferred wipeout is used... 00:28:16.613 [WS-CLEANUP] done 00:28:16.614 [Pipeline] } 00:28:16.628 [Pipeline] // stage 00:28:16.633 [Pipeline] } 00:28:16.644 [Pipeline] // node 00:28:16.649 [Pipeline] End of Pipeline 00:28:16.677 Finished: SUCCESS
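With results collected, the pipeline tears the test VM down and salvages the run's output before the workspace wipe. Reduced to the underlying commands shown above (Jenkins' retry and logging wrappers omitted), the teardown is essentially:

    # Stop the vagrant-managed test VM gracefully, then delete it
    vagrant halt
    vagrant destroy -f

    # Keep the run's artifacts: move them out of the VM directory into the job workspace
    mv output /var/jenkins/workspace/nvme-vg-autotest/output

The remaining stages (compress_artifacts.sh, check_artifacts_size.sh, archiveArtifacts, cleanWs) then package and publish that directory before the workspace itself is deleted.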